Newsletter Subject

Redistributing Data Across Files (2019-11-07)

From

sqlservercentral.com

Email Address

subscriptions@sqlservercentral.com

Sent On

Thu, Nov 7, 2019 01:06 PM

Email Preheader Text

SQLServerCentral Newsletter for November 7, 2019

Featured Contents

- Redistributing Data Across Files
- Introduction to DevOps: Database Delivery
- How Fiducia & GAD scaled with SQL Monitor
- From the SQL Server Central Blogs - That slippery slope
- From the SQL Server Central Blogs - Removing SSMS Completion Time–#SQLNewBlogger

Question of the Day

- Ending a line with a Backslash

Featured Script

- T-SQL query to copy data from one database to another database

The Voice of the DBA

How Did I Do?

I made a number of predictions for the opening keynote at the PASS Summit. I wrote the predictions a week before the event, making my own guesses about what might be revealed; I was busy in pre-conference sessions on Monday and Tuesday, so I had to write something well in advance. I was flying home yesterday and had no chance to watch the keynote until later, so this is a look back after catching up Wednesday evening at home. How did I do?

With the Ignite conference taking place this week, I had guessed that SQL Server 2019 would release there and then be a part of the Summit keynote. I was right. On Monday, as I was taking a break during the SQL in the City Summit, I saw that SQL Server 2019 had been released. A milestone, and one that I think changes the data platform. It's an inflection point, and I agree with this tweet from Distinguished Engineer, Slava Oks. Rohan Kumar did announce it at PASS as well, so maybe half credit here?

I didn't mention Hyperscale yesterday, as I wasn't sure what I knew under NDA and what was public. The demonstration of Azure Arc and Hyperscale for PostgreSQL is pretty cool.
While I used to think Hyperscale was only for the largest of the large databases, I now think that many smaller, 2-4TB systems (I can't believe I think of that as small) might consider moving to Hyperscale and preparing for future growth into the low single digits of TBs. BTW, Azure Arc is amazing. I knew about it previously, but didn't expect it this year. I did mention Azure Stack deploying on premises, and I think Azure Arc covers this, so 1 1/2 right.

Azure Synapse was something I didn't expect. I have been looking at Big Data Clusters as a game changer for the relational platform. Synapse combines Azure SQL Data Warehouse and Data Lakes into one platform. Since these were both the places where we first got compute separated from storage, this makes sense. Both of these platforms seemed similar to me, so combining them and letting customers drop in structured and unstructured data is a natural step.

The Edge is a bigger deal than I thought. This was mentioned as a private preview during Build, and I should have guessed it would move to public preview. This was one of those areas I haven't been as focused on, and I wasn't sure what information I'd gotten under NDA, so I didn't include it. Certainly a miss here, but I do think the demo in the keynote with Zeiss, making decisions from sensor data, is something more organizations will do. Sensors generate too much data, and trying to move all of it across a network to the cloud doesn't make sense. Train models and let a low-power SQL Server Edge engine make a few decisions without moving data.

There was more in the keynote, and it's worth watching. If nothing else, I think both the opening Ignite keynote and the opening Summit keynote are inspiring for us as data professionals. You can dream about possibilities and truly see that data is an important asset that Microsoft is empowering us to make better decisions about, based on analytics.
Steve Jones - SSC Editor

Join the debate, and respond to today's editorial on the forums.

Featured Contents

Redistributing Data Across Files
Steve Jones - SSC Editor, from SQLServerCentral
When adding files to a filegroup, SQL Server doesn't automatically balance the data across the files immediately. See how this works and how you can speed it up.

Introduction to DevOps: Database Delivery
Additional Articles from SimpleTalk
Before including the database in your DevOps pipeline, you'll need to decide whether you are going to use a state-based or a migrations-based approach. In this article, Robert Sheldon explains the difference between them and discusses the benefits of each.

How Fiducia & GAD scaled with SQL Monitor
Additional Articles from Redgate
When growth was expected on the horizon for Fiducia & GAD, hiring more DBAs was not an option. With SQL Monitor they were able to scale at an unprecedented rate: 150 new servers and 70+ applications in one year.

From the SQL Server Central Blogs - That slippery slope
Kenneth.Fisher from SQLStudies
We need to have processes and rules. They are important to any business. That said, exceptions are also going to ... Continue reading

From the SQL Server Central Blogs - Removing SSMS Completion Time–#SQLNewBlogger
Steve Jones - SSC Editor from The Voice of the DBA
Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers. If you've upgraded your SSMS lately, you might... Continue reading
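The idea behind the featured article can be sketched in a few statements (the database, file, and index names below are hypothetical, and the exact steps are in the article itself): adding a file to a filegroup does not move existing pages, but rebuilding an index on that filegroup rewrites the table, letting SQL Server's proportional-fill algorithm spread data across all files.

```sql
-- Hypothetical names throughout; assumes a database named Sales already exists.
ALTER DATABASE Sales ADD FILE
    (NAME = N'Sales_Data2', FILENAME = N'D:\Data\Sales_Data2.ndf', SIZE = 1GB)
    TO FILEGROUP [PRIMARY];

-- Existing data stays where it is until pages are rewritten. Rebuilding the
-- clustered index rewrites the table, so proportional fill can redistribute
-- pages across both files in the filegroup.
ALTER INDEX [PK_Orders] ON dbo.Orders REBUILD;

-- Check the spread per data file (FILEPROPERTY returns pages; 8 KB each).
SELECT name,
       size * 8 / 1024 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';
```

This is only a sketch of the technique; the article covers why the balancing doesn't happen automatically and other ways to speed it up.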
Question of the Day

Today's question (by Thom A): Ending a line with a Backslash

You have the below procedure, which dynamically inserts data into the table specified by the parameter `@TargetTable`:

```sql
CREATE PROC dbo.InsertFilePath @TargetTable sysname, @DirectoryPath NVARCHAR(255), @FileName NVARCHAR(255)
AS
BEGIN
    SELECT @TargetTable = t.name
    FROM sys.schemas AS s
    JOIN sys.tables AS t ON s.schema_id = t.schema_id
    WHERE s.name = N'dbo'
      AND t.name = @TargetTable;

    DECLARE @SQL NVARCHAR(MAX);

    SET @SQL = N'
    INSERT INTO dbo.' + QUOTENAME(@TargetTable) + N' (DirectoryPath, [FileName])
    VALUES(@DirectoryPath, --This should be the full directory path, with the closing \. For example \\ShareServer\ShareName\Directory\
           @FileName); --This should only be the name of the file, for example MyImage.jpg';

    EXEC sp_executesql @SQL, N'@DirectoryPath nvarchar(255), @FileName nvarchar(255)', @DirectoryPath, @FileName;
END;
```

Assuming the table `ImageFiles` exists on the `dbo` schema, what would be the result of the below SQL?

```sql
EXEC dbo.InsertFilePath @TargetTable = N'ImageFiles',
                        @DirectoryPath = '\\Share01\Public\Cats\',
                        @FileName = 'SleepyGeorge.png';
```

Think you know the answer? Click here and find out if you are right.

Yesterday's Question of the Day (by Steve Jones - SSC Editor): Importing CSV with Numpy

I have this data in a text file:

```text
Steve,DEN,LHR,5
Kendra,PDX,LHR,4
Grant,BOS,LHR,7
Steve,DEN,SYD,2
```

I have imported the numpy module as np and now want to read this data into an array using that module. What method should I use?

Answer:

```python
flights = np.genfromtxt('flight.csv', delimiter=',', dtype=("|S10", "|S10", "|S10", int))
```

Explanation: The numpy module has a genfromtxt() method that will read in a csv.
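As a quick check of that answer, the sketch below feeds the same four rows through an in-memory buffer rather than an actual flight.csv file, and shows the structured array that genfromtxt builds from a per-column dtype sequence:

```python
import io
import numpy as np

# Sample rows from the question (normally these would live in flight.csv).
data = io.StringIO(
    "Steve,DEN,LHR,5\n"
    "Kendra,PDX,LHR,4\n"
    "Grant,BOS,LHR,7\n"
    "Steve,DEN,SYD,2\n"
)

# A sequence of dtypes gives each column its own type, producing a 1-D
# structured array: three 10-byte strings and an integer per record.
flights = np.genfromtxt(data, delimiter=',', dtype=("|S10", "|S10", "|S10", int))

print(flights.shape)  # one record per input line
print(flights[0])     # first record: name, origin, destination, count
```

Note that the fixed-width `|S10` fields come back as bytes, so comparisons need byte literals such as `b'Steve'`.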
Ref: numpy.genfromtxt - Discuss this question and answer on the forums.

Featured Script

T-SQL query to copy data from one database to another database
Brahmanand Shukla from SQLServerCentral

A T-SQL query to copy the rows of all the tables from one database to another, but only for tables that have rows in the source database and the exact same schema in both source and destination databases.

```sql
/*
Author  : Brahmanand Shukla
Date    : 28-Oct-2019
Purpose : T-SQL query to copy the rows of all the tables from one database to
          another database only if they have rows in Source DB and exact same
          Schema in both Source and Destination DB.
*/
;WITH cte_All_Tables_With_Foreign_Key
-- Get all the tables (from Destination DB) having a foreign key. Ignore the self-referencing.
AS
(
    SELECT PAR_SCH.[name]            AS [Parent_Schema_Name]
         , PAR_TAB.[name]            AS [Parent_Table_Name]
         , REF_SCH.[name]            AS [Referenced_Schema_Name]
         , REF_TAB.[name]            AS [Referenced_Table_Name]
         , FK.[parent_object_id]     AS [parent_object_id]
         , FK.[referenced_object_id] AS [referenced_object_id]
    FROM destination.sys.foreign_keys FK
    INNER JOIN destination.sys.tables PAR_TAB ON PAR_TAB.[object_id] = FK.[parent_object_id]
    INNER JOIN destination.sys.schemas PAR_SCH ON PAR_SCH.[schema_id] = PAR_TAB.[schema_id]
    INNER JOIN destination.sys.tables REF_TAB ON REF_TAB.[object_id] = FK.[referenced_object_id]
    INNER JOIN destination.sys.schemas REF_SCH ON REF_SCH.[schema_id] = REF_TAB.[schema_id]
    WHERE FK.[type] = 'F'
      AND FK.[parent_object_id] <> [referenced_object_id]
      AND PAR_TAB.type = 'U'
      AND REF_TAB.type = 'U'
)
, cte_Find_All_Referenced_Tables_In_Sequence
/* Recursive CTE : Find the sequence of each referenced table.
   E.g. if Table1 references Table2 and Table2 references Table3, then Table3
   should be assigned Sequence 1, Table2 Sequence 2 and Table1 Sequence 3 */
AS
(
    SELECT FK1.[Parent_Schema_Name]
         , FK1.[Parent_Table_Name]
         , FK1.[Referenced_Schema_Name]
         , FK1.[Referenced_Table_Name]
         , FK1.[parent_object_id]
         , FK1.[referenced_object_id]
         , 1 AS [Iteration_Sequence_No]
    FROM cte_All_Tables_With_Foreign_Key FK1
    LEFT JOIN cte_All_Tables_With_Foreign_Key FK2 ON FK1.[parent_object_id] = FK2.[referenced_object_id]
    WHERE FK2.[parent_object_id] IS NULL
    UNION ALL
    SELECT FK.[Parent_Schema_Name]
         , FK.[Parent_Table_Name]
         , FK.[Referenced_Schema_Name]
         , FK.[Referenced_Table_Name]
         , FK.[parent_object_id]
         , FK.[referenced_object_id]
         , CTE.[Iteration_Sequence_No] + 1 AS [Iteration_Sequence_No]
    FROM cte_All_Tables_With_Foreign_Key FK
    INNER JOIN cte_Find_All_Referenced_Tables_In_Sequence CTE ON FK.[parent_object_id] = CTE.[referenced_object_id]
    WHERE FK.[referenced_object_id] <> CTE.[parent_object_id]
)
, cte_All_Tables
/* Merge all tables (Tables with Foreign Key and Tables without Foreign Key)
   along with their sort order. Tables without Foreign Key and Tables with
   root-level Foreign Key should be given the least sort order */
AS
(
    SELECT [Referenced_Schema_Name] AS [TABLE_SCHEMA]
         , [Referenced_Table_Name]  AS [TABLE_NAME]
         , 1 AS [SORT_ORDER]
         , [Iteration_Sequence_No]
    FROM cte_Find_All_Referenced_Tables_In_Sequence
    UNION
    SELECT STGA.[Parent_Schema_Name] AS [TABLE_SCHEMA]
         , STGA.[Parent_Table_Name]  AS [TABLE_NAME]
         , 2 AS [SORT_ORDER]
         , 0 AS [Iteration_Sequence_No]
    FROM cte_Find_All_Referenced_Tables_In_Sequence STGA
    LEFT JOIN cte_Find_All_Referenced_Tables_In_Sequence STGB ON STGB.[referenced_object_id] = STGA.[parent_object_id]
    WHERE STGB.[Iteration_Sequence_No] IS NULL
    UNION ALL
    SELECT SCH.[name] AS [TABLE_SCHEMA]
         , TAB.[name] AS [TABLE_NAME]
         , 3 AS [SORT_ORDER]
         , 0 AS [Iteration_Sequence_No]
    FROM destination.sys.tables TAB
    INNER JOIN destination.sys.schemas SCH ON SCH.[schema_id] = TAB.[schema_id]
    LEFT JOIN cte_Find_All_Referenced_Tables_In_Sequence STGB
           ON (STGB.[parent_object_id] = TAB.[object_id] OR STGB.[referenced_object_id] = TAB.[object_id])
    WHERE STGB.[Iteration_Sequence_No] IS NULL
)
, cte_Source_Table
/* Get all the tables (from Source DB) along with the following attributes:
   1) Full Name in the format db.schema.table
   2) No of columns
   3) Complete column list to be used in "INSERT INTO Destination ... SELECT ... FROM Source"
   4) Checksum of all the important column attributes forming the Table Schema.
      This will be used to compare the tables between Source DB and Destination DB having exact Schema.
   *** Tables having no rows will be filtered out / ignored. */
AS
(
    SELECT [TAB].[TABLE_CATALOG]
         , [TAB].[TABLE_SCHEMA]
         , [TAB].[TABLE_NAME]
         , QUOTENAME([TAB].[TABLE_CATALOG]) + '.' + QUOTENAME([TAB].[TABLE_SCHEMA]) + '.'
           + QUOTENAME([TAB].[TABLE_NAME]) AS [TABLE_FULL_NAME]
         , [COL].[NO_OF_COLUMNS]
         , STUFF(
           (
               SELECT ', ' + QUOTENAME(COL.[COLUMN_NAME]) + ' '
               FROM source.INFORMATION_SCHEMA.COLUMNS COL
               WHERE COL.[TABLE_SCHEMA] = TAB.[TABLE_SCHEMA]
                 AND COL.[TABLE_NAME] = TAB.[TABLE_NAME]
               ORDER BY COL.[ORDINAL_POSITION] ASC
               FOR XML PATH ('')
           ), 1, 1, '') AS [COLUMN_NAME_LIST]
         , COL.[COLUMNS_CHECKSUM]
    FROM source.INFORMATION_SCHEMA.TABLES [TAB]
    CROSS APPLY
    (
        SELECT COUNT(1) AS [NO_OF_COLUMNS]
             , SUM(CAST(CHECKSUM([COLUMN_DEFAULT], [IS_NULLABLE], [DATA_TYPE], [CHARACTER_MAXIMUM_LENGTH]
                 , [CHARACTER_OCTET_LENGTH], [NUMERIC_PRECISION], [NUMERIC_PRECISION_RADIX]
                 , [NUMERIC_SCALE], [DATETIME_PRECISION], [CHARACTER_SET_NAME], [COLLATION_NAME]) AS NUMERIC)) AS [COLUMNS_CHECKSUM]
        FROM source.INFORMATION_SCHEMA.COLUMNS [COL]
        WHERE [COL].[TABLE_SCHEMA] = [TAB].[TABLE_SCHEMA]
          AND [COL].[TABLE_NAME] = [TAB].[TABLE_NAME]
    ) [COL]
    WHERE EXISTS
    (
        SELECT 1
        FROM source.sys.tables T
        INNER JOIN source.sys.schemas S ON S.[schema_id] = T.[schema_id]
        INNER JOIN source.sys.partitions P ON P.[object_id] = T.[object_id]
        WHERE S.[name] = [TAB].[TABLE_SCHEMA]
          AND T.[name] = [TAB].[TABLE_NAME]
          AND P.[index_id] < 2
        HAVING SUM(P.[rows]) > 0
    )
)
, cte_Destination_Table
/* Get all the tables (from Destination DB) along with the following attributes:
   1) Full Name in the format db.schema.table
   2) No of columns
   3) Checksum of all the important column attributes forming the Table Schema.
      This will be used to compare the tables between Source DB and Destination DB having exact Schema.
   4) Attribute indicating whether the table has an IDENTITY column.
      This is to manage the IDENTITY_INSERT ON/OFF */
AS
(
    SELECT [TAB].[TABLE_CATALOG]
         , [TAB].[TABLE_SCHEMA]
         , [TAB].[TABLE_NAME]
         , QUOTENAME([TAB].[TABLE_CATALOG]) + '.' + QUOTENAME([TAB].[TABLE_SCHEMA]) + '.'
           + QUOTENAME([TAB].[TABLE_NAME]) AS [TABLE_FULL_NAME]
         , [COL].[NO_OF_COLUMNS]
         , CASE WHEN EXISTS
           (
               SELECT 1
               FROM destination.sys.tables T
               INNER JOIN destination.sys.schemas S ON S.[schema_id] = T.[schema_id]
               INNER JOIN destination.sys.identity_columns I ON I.[object_id] = T.[object_id]
               WHERE S.[name] = [TAB].[TABLE_SCHEMA]
                 AND T.[name] = [TAB].[TABLE_NAME]
                 AND I.[is_identity] = 1
           ) THEN 1 ELSE 0 END AS [TABLE_HAS_IDENTITY_COLUMN]
         , STUFF(
           (
               SELECT ', ' + QUOTENAME(COL.[COLUMN_NAME]) + ' '
               FROM destination.INFORMATION_SCHEMA.COLUMNS COL
               WHERE COL.[TABLE_SCHEMA] = TAB.[TABLE_SCHEMA]
                 AND COL.[TABLE_NAME] = TAB.[TABLE_NAME]
               ORDER BY COL.[ORDINAL_POSITION] ASC
               FOR XML PATH ('')
           ), 1, 1, '') AS [COLUMN_NAME_LIST]
         , COL.[COLUMNS_CHECKSUM]
    FROM destination.INFORMATION_SCHEMA.TABLES [TAB]
    CROSS APPLY
    (
        SELECT COUNT(1) AS [NO_OF_COLUMNS]
             , SUM(CAST(CHECKSUM([COLUMN_DEFAULT], [IS_NULLABLE], [DATA_TYPE], [CHARACTER_MAXIMUM_LENGTH]
                 , [CHARACTER_OCTET_LENGTH], [NUMERIC_PRECISION], [NUMERIC_PRECISION_RADIX]
                 , [NUMERIC_SCALE], [DATETIME_PRECISION], [CHARACTER_SET_NAME], [COLLATION_NAME]) AS NUMERIC)) AS [COLUMNS_CHECKSUM]
        FROM destination.INFORMATION_SCHEMA.COLUMNS [COL]
        WHERE [COL].[TABLE_SCHEMA] = [TAB].[TABLE_SCHEMA]
          AND [COL].[TABLE_NAME] = [TAB].[TABLE_NAME]
    ) [COL]
)
/* The following SELECT compares the tables between Source DB and Destination DB
   and fetches the tables having the exact same Schema.

   *** Salient features of the generated "INSERT INTO Destination ... SELECT ... FROM Source" T-SQL:
   1) "SET XACT_ABORT ON" is used to roll back the transaction in case of error
   2) A "TRY .. CATCH" block is used for each INSERT to catch errors
   3) The "IDENTITY_INSERT" setting in the generated query is dynamic
   4) All rows will be written in the same sort order as defined by the Primary Key in the Source DB
   5) The generated T-SQL will be well formatted, and each INSERT statement properly commented for readability
   6) The generated T-SQL also includes statements showing the status of each query: Started, Completed or Failed

   *** Disclaimer : Executing the generated T-SQL multiple times will duplicate
   rows, so take the necessary caution. However, this can be made rerunnable if
   all the tables have a Primary Key. */
SELECT ROW_NUMBER() OVER (ORDER BY TBL_SEQ.[Sort_Order] ASC, TBL_SEQ.[Iteration_Sequence_No] DESC) AS [TABLE_SEQUENCE_NO]
     , [DEST_TAB].[TABLE_SCHEMA] AS [TABLE_SCHEMA]
     , [DEST_TAB].[TABLE_NAME]   AS [TABLE_NAME]
     , (CASE WHEN ROW_NUMBER() OVER (ORDER BY TBL_SEQ.[Sort_Order] ASC, TBL_SEQ.[Iteration_Sequence_No] DESC) = 1
             THEN 'SET XACT_ABORT ON;' + CHAR(13) + CHAR(10) + 'SET NOCOUNT ON;'
             ELSE '' END
        + CHAR(13) + CHAR(10)
        + '-- ***************************************** INSERT SCRIPT FOR ' + [DEST_TAB].[TABLE_FULL_NAME] + ' -******* START *****************'
        + CHAR(13) + CHAR(10)
        + 'SELECT ''INSERT SCRIPT EXECUTION STARTED FOR ' + [DEST_TAB].[TABLE_FULL_NAME] + ''''
        + CHAR(13) + CHAR(10)
        + 'BEGIN TRY' + CHAR(13) + CHAR(10)
        + CASE WHEN [DEST_TAB].[TABLE_HAS_IDENTITY_COLUMN] = 1
               THEN CHAR(9) + 'SET IDENTITY_INSERT ' + [DEST_TAB].[TABLE_FULL_NAME] + ' ON;' + CHAR(13) + CHAR(10)
               ELSE '' END
        + CHAR(9) + 'INSERT INTO ' + [DEST_TAB].[TABLE_FULL_NAME] + '(' + DEST_TAB.[COLUMN_NAME_LIST] + ')' + CHAR(13) + CHAR(10)
        + CHAR(9) + 'SELECT ' + DEST_TAB.[COLUMN_NAME_LIST] + ' FROM ' + [SRC_TAB].[TABLE_FULL_NAME] + CHAR(13) + CHAR(10)
        + CHAR(9)
        + CASE WHEN [SRC_PK].[CONSTRAINT_NAME] IS NOT NULL
               THEN 'ORDER BY' + STUFF(
                    (
                        SELECT ', ' + QUOTENAME(COL.[name]) + ' '
                             + CASE WHEN IX_COL.[is_descending_key] = 1 THEN 'DESC' ELSE 'ASC' END
                        FROM source.sys.indexes IX
                        INNER JOIN source.sys.index_columns IX_COL ON IX_COL.[object_id] = IX.[object_id]
                        INNER JOIN source.sys.columns COL ON COL.[object_id] = IX_COL.[object_id]
                                                         AND COL.[column_id] = IX_COL.[column_id]
                        WHERE IX.[name] = [SRC_PK].[CONSTRAINT_NAME]
                        FOR XML PATH ('')
                    ), 1, 1, '')
               ELSE '' END
        + CHAR(13) + CHAR(10)
        + CASE WHEN [DEST_TAB].[TABLE_HAS_IDENTITY_COLUMN] = 1
               THEN CHAR(9) + 'SET IDENTITY_INSERT ' + [DEST_TAB].[TABLE_FULL_NAME] + ' OFF;' + CHAR(13) + CHAR(10)
               ELSE '' END
        + CHAR(9) + 'SELECT ''INSERT SCRIPT EXECUTION COMPLETED FOR ' + [DEST_TAB].[TABLE_FULL_NAME] + '''' + CHAR(13) + CHAR(10)
        + 'END TRY' + CHAR(13) + CHAR(10)
        + 'BEGIN CATCH' + CHAR(13) + CHAR(10)
        + CHAR(9) + 'SELECT ''INSERT SCRIPT EXECUTION FAILED FOR ' + [DEST_TAB].[TABLE_FULL_NAME] + '''' + CHAR(13) + CHAR(10)
        + 'END CATCH' + CHAR(13) + CHAR(10)
        + '-- ***************************************** INSERT SCRIPT FOR ' + [DEST_TAB].[TABLE_FULL_NAME] + ' -******* END *****************'
        + CHAR(13) + CHAR(10)
        + 'GO') AS [TSQL_INSERT_SCRIPT]
FROM cte_Source_Table [SRC_TAB]
INNER JOIN cte_Destination_Table [DEST_TAB]
        ON [DEST_TAB].[TABLE_SCHEMA] = [SRC_TAB].[TABLE_SCHEMA]
       AND [DEST_TAB].[TABLE_NAME] = [SRC_TAB].[TABLE_NAME]
       AND [DEST_TAB].[NO_OF_COLUMNS] = [SRC_TAB].[NO_OF_COLUMNS]
INNER JOIN cte_All_Tables TBL_SEQ
        ON [TBL_SEQ].[TABLE_SCHEMA] = [SRC_TAB].[TABLE_SCHEMA]
       AND [TBL_SEQ].[TABLE_NAME] = [SRC_TAB].[TABLE_NAME]
LEFT JOIN source.INFORMATION_SCHEMA.TABLE_CONSTRAINTS [SRC_PK]
       ON [SRC_PK].[TABLE_SCHEMA] = [SRC_TAB].[TABLE_SCHEMA]
      AND [SRC_PK].[TABLE_NAME] = [SRC_TAB].[TABLE_NAME]
      AND [SRC_PK].[CONSTRAINT_TYPE] = 'PRIMARY KEY'
WHERE [SRC_TAB].[COLUMNS_CHECKSUM] = [DEST_TAB].[COLUMNS_CHECKSUM]
OPTION (MAXRECURSION 10)
```

More »

Database Pros Who Need Your Help

Here are a few of the new posts today on the forums. To see more, visit the forums.
SQL Server 2017 - Administration

DB access monitoring - Hi All, I need to monitor an application's (service user's) actions on a SQL Server DB, e.g. when the application is querying predefined tables and columns. I do not want to use SQL Server DB audit, as the log file size increases and gets bigger by the day. Is there a way to get the info […]

SQL Server 2017 - Development

Date Dimension Table and Query - I have a date dimension with the following data: WarrantyId 889840, QuarterName Y1Q1, StartDate 2019-07-01 00:00:00.000; WarrantyId 889840, QuarterName Y1Q2, StartDate 2019-10-01 00:00:00.000; WarrantyId 889840, QuarterName Y1Q3 […]

SQL Server 2016 - Administration

SQL Copy Database Wizard Broken - I have been using the SQL Copy Database Wizard (right click on the DB and the Copy Database option) through the GUI (SSMS) to copy a DB, up until SQL 2016. In every installation I have where 2016 or greater is involved, the feature is broken and always gives the same error. It doesn't matter if […]

Is there a SQL to show how many total select statements hit a particular table? - Since the last server restart, or any point of time. Without any auditing or anything like that set up. Can DMVs be used to create such a query? Thanks.

Email alert - Can Extended Events send out an email alert? For example, I have a Windows or SQL account created on a server. I can track that the account has been created, but can I configure an email alert to be sent to me when the account creation happens? Thanks.

Account alert - I have a Windows service account (AAAA) which gives access to other Windows accounts or creates other SQL accounts in a SQL database server. How can I identify any other accounts which were given access or created in that database server, but not created or given access through this Windows service account (AAAA)? I need to be immediately […]

SQL Server 2016 - Development and T-SQL

Using CASE Statement to SET variable when null value - Hi, I have an SP that does an update to add comments to the table. However, when there is a NULL value it errors out. I'm trying to use a CASE WHEN to put in a default value if the comment passed is NULL. I've put this together, but am not sure if I […]

Development - SQL Server 2014

SELECT processed before WHERE clause - I have a SELECT CAST(Column to int) that has been working OK, on the assumption that the WHERE has already filtered out any rows that would cause the CAST to error. Then I changed the filtering (still excluding any problem rows) - and it failed. I finally realised that the execution plan must have changed, […]

SQL 2012 - General

How to enhance this query for good performance? - I work on SQL Server 2012 and I need to make this query perform well. This stored procedure works, but I need to know when to drop the temp table and CTE, plus how to write it with best practices for performance: create Proc ImporterQueue_RunModified As WITH CTE AS ( […]

SQL Server 2012 - T-SQL

Get some rows depending on the Date of the next rows - Hi, sorry for my bad English. I need to select some rows of a table like this: Id Object A, Start 06/11/2019 09:00, End 06/11/2019 09:15; A […]

SQL Server 2008 - General

Connecting to SQL Server 2005 from a newer environment - Hi, my first question here, so please be gentle. I have been asked by a potential customer to do some SQL work on their SQL Server 2005 system, but connectivity may be an issue. I have SSMS 18 running on Windows 10 Pro on a MacBook (through Parallels), and I understand that is a […]

Reporting Services

Hosting PowerBI in SSRS - Overview: I have been tasked with the question of whether there is a tool that can host Power BI reports other than the Power BI Report Server. The problem, I believe, is licensing. We have a license for SQL Server Enterprise 2017 that includes SSRS & SSAS. Question: Can we host Power BI reports in SSRS? I believe I got the answer […]

XML

XML Help.. adding a root to the root? - I have an XML output that gets written to a file for BizTalk to send to another company, and I'm trying to format the root clause the way they want, but it's a bit bizarre and I'm having trouble. This is how they want it formatted: ... ... […]

TFS/Data Dude/DBPro

TFS change tracking and details - Hi, I had a quick question; I am using TFS 2015. How do I see the changeset number of an SP which was deployed just now? How do I find out at what time that stored procedure or object was checked in? Thanks!

SQL Server 2005 Integration Services

SSIS 2016 ROW counts to a Flat File destination - Hi, I have 3 flat file sources. I want to count the number of records in each flat file and write the counts to a flat file in the format: FileName, Count. Can anyone suggest step by step how exactly I achieve this? Many thanks.

RSS Feed | Twitter

This email has been sent to {EMAIL}. To be removed from this list, please click here. If you have any problems leaving the list, please contact webmaster@sqlservercentral.com. This newsletter was sent to you because you signed up at SQLServerCentral.com.

©2019 Redgate Software Ltd, Newnham House, Cambridge Business Park, Cambridge, CB4 0WZ, United Kingdom. All rights reserved.
