SQL Server 2012 :: Getting Results From All Tables In Database
Mar 13, 2014
Below is my statement to get data from one VARCHAR column of table SUPPLY_ITEM_01:
SELECT
    @@SERVERNAME AS ServerName,
    DB_NAME() AS DatabaseName,
    SUM(CASE WHEN CHARINDEX(CHAR(013), supplydetail) > 0 THEN 1 ELSE 0 END) AS TotalCHAR013,
    SUM(CASE WHEN CHARINDEX(CHAR(012), supplydetail) > 0 THEN 1 ELSE 0 END) AS TotalCHAR012,
    SUM(CASE WHEN CHARINDEX(CHAR(010), supplydetail) > 0 THEN 1 ELSE 0 END) AS TotalCHAR010,
    SUM(CASE WHEN CHARINDEX(CHAR(009), supplydetail) > 0 THEN 1 ELSE 0 END) AS TotalCHAR009
FROM
[code]...
I need to get results from all tables and all columns that contain bad data, with the schema name, table name, and column name included in the result.
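A hedged sketch of one workable approach (dynamic SQL over the catalog views; adjust the CHAR() list and type filter to taste): build one aggregate SELECT per character column and UNION them together, carrying the schema, table, and column names along:

DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql + N'UNION ALL SELECT '
    + QUOTENAME(s.name, '''') + N' AS SchemaName, '
    + QUOTENAME(t.name, '''') + N' AS TableName, '
    + QUOTENAME(c.name, '''') + N' AS ColumnName, '
    + N'SUM(CASE WHEN CHARINDEX(CHAR(13), ' + QUOTENAME(c.name) + N') > 0 THEN 1 ELSE 0 END) AS TotalCHAR013, '
    + N'SUM(CASE WHEN CHARINDEX(CHAR(12), ' + QUOTENAME(c.name) + N') > 0 THEN 1 ELSE 0 END) AS TotalCHAR012, '
    + N'SUM(CASE WHEN CHARINDEX(CHAR(10), ' + QUOTENAME(c.name) + N') > 0 THEN 1 ELSE 0 END) AS TotalCHAR010, '
    + N'SUM(CASE WHEN CHARINDEX(CHAR(9), '  + QUOTENAME(c.name) + N') > 0 THEN 1 ELSE 0 END) AS TotalCHAR009 '
    + N'FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + NCHAR(10)
FROM sys.columns AS c
JOIN sys.tables  AS t ON t.object_id = c.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE c.system_type_id IN (TYPE_ID('varchar'), TYPE_ID('nvarchar'), TYPE_ID('char'), TYPE_ID('nchar'));

SET @sql = STUFF(@sql, 1, 10, '');   -- strip the leading 'UNION ALL '
EXEC sys.sp_executesql @sql;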
I have this T-SQL code that gathers table stats on one database at a time. I was wondering how I would get it to loop through all databases, so it pulls the stats from all tables in all databases. Here is my code:
Select object_schema_name(UStat.object_id) + '.' + object_name(UStat.object_id) As [Object Name],
       Case When Sum(User_Updates + User_Seeks + User_Scans + User_Lookups) = 0 Then Null
            Else Cast(Sum(User_Seeks + User_Scans + User_Lookups) As Decimal) / Cast(Sum(User_Updates
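A hedged sketch of the loop: sp_MSforeachdb is undocumented but widely used, and '?' is replaced with each database name. The inner query is a trimmed stand-in for the stats query above:

CREATE TABLE #Usage
(DatabaseName SYSNAME, ObjectName NVARCHAR(520), Reads BIGINT, Writes BIGINT);

EXEC sys.sp_MSforeachdb N'
USE [?];
INSERT INTO #Usage
SELECT DB_NAME(),
       OBJECT_SCHEMA_NAME(u.object_id) + ''.'' + OBJECT_NAME(u.object_id),
       SUM(u.user_seeks + u.user_scans + u.user_lookups),
       SUM(u.user_updates)
FROM sys.dm_db_index_usage_stats AS u
WHERE u.database_id = DB_ID()
GROUP BY u.object_id;';

SELECT * FROM #Usage ORDER BY DatabaseName, ObjectName;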
I'm doing a data migration from one CRM to another and need a listing of all entities in the current CRM, together with fields and field types. OK, the XRMToolBox gives me that, but I'm hoping there is a SQL tool out there that will do the same, plus give me a count of the number of entries in each field in the database.
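If the CRM's back end is SQL Server, a hedged starting point is the metadata catalog; a per-field count of non-empty entries would need dynamic SQL layered on top of this (not shown):

SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION;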
I need a query to return the top 10 tables in each database on a server. I have used EXEC sp_msforeachtable 'sp_spaceused ''?''', which returns what I need for one DB, but I need it to loop through and gather the info for all DBs on the server; maybe I need a cursor, not sure. For reporting reasons I would like to include the name of the server and the name of the database.
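A hedged sketch that nests the two undocumented loops and stamps each row with the server and database names; note the outer loop's placeholder is switched to '~' so it doesn't clash with the inner '?', and the TOP 10 per database comes from ROW_NUMBER at the end:

CREATE TABLE #SpaceUsed
(ServerName SYSNAME NULL, DatabaseName SYSNAME NULL, TableName SYSNAME,
 Rows VARCHAR(20), Reserved VARCHAR(20), Data VARCHAR(20),
 IndexSize VARCHAR(20), Unused VARCHAR(20));

EXEC sys.sp_MSforeachdb
     @replacechar = N'~',
     @command1 = N'
USE [~];
INSERT INTO #SpaceUsed (TableName, Rows, Reserved, Data, IndexSize, Unused)
EXEC sys.sp_MSforeachtable ''EXEC sp_spaceused ''''?'''''';
UPDATE #SpaceUsed SET ServerName = @@SERVERNAME, DatabaseName = DB_NAME()
WHERE DatabaseName IS NULL;';

SELECT *
FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY DatabaseName
                ORDER BY CAST(REPLACE(Reserved, ' KB', '') AS BIGINT) DESC) AS rn
      FROM #SpaceUsed) AS x
WHERE rn <= 10
ORDER BY DatabaseName, rn;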
I have a database with 25 tables. All tables have a ProductID column. I need to find the total record count for ProductID = 20003 across all the tables in the database.
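A hedged sketch: use the catalog to find every table with a ProductID column, build a UNION ALL of per-table counts with dynamic SQL, and run it (wrap the result in a SUM for a single grand total):

DECLARE @sql NVARCHAR(MAX);

SELECT @sql = STUFF((
    SELECT NCHAR(10) + N'UNION ALL SELECT N''' + s.name + N'.' + t.name
         + N''' AS TableName, COUNT(*) AS Cnt FROM '
         + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
         + N' WHERE ProductID = 20003'
    FROM sys.tables  AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE COL_LENGTH(QUOTENAME(s.name) + N'.' + QUOTENAME(t.name), 'ProductID') IS NOT NULL
    FOR XML PATH(''), TYPE).value('(./text())[1]', 'NVARCHAR(MAX)'), 1, 11, '');

EXEC sys.sp_executesql @sql;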
I have databases that have zero keys defined, no foreign key constraints, no unique value constraints. I cannot assume that an Identity column is the Id. The end game is to be able to output that [Table 1].[TableTwoId] = [Table 2].[Id], but I cannot assume that all linkage in the database will be as simple as saying "if the field name contains a table name + Id, then it is the Id in that table."
Currently I have written a script that cycles through all columns and starts identifying keys in individual tables based on the distinctness of the values in each column (i.e., Column1 is 99% distinct, so it is the unique value). It will also combine columns if the linkage is a combination of one or more columns in Table 1 matching the same number of columns in Table 2. This takes a long time and is somewhat unreliable, as IDENTITY columns may seem to relate to one another when they don't.
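For reference, a minimal sketch of the distinctness test described above, for a single hypothetical column; the real script would generate this per column:

SELECT COUNT(DISTINCT Column1) * 100.0 / NULLIF(COUNT(*), 0) AS PctDistinct
FROM dbo.Table1;   -- hypothetical table and column names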
I am using the following SELECT statement to get the row count from a table on a linked SQL server.
SELECT Count(*) FROM OPENQUERY (CMSPROD, 'Select * From MHDLIB.MHSERV0P')
MHDLIB is the library name in the IBM DB2 database. The above query gives me only the row count of table MHSERV0P. However, I need to get the names, row counts, and sizes of all tables that exist in the MHDLIB library. Is it possible at all?
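It should be, with the caveat that the catalog lives on the DB2 side. A hedged sketch assuming the linked server is DB2 for i: the QSYS2 catalog views expose per-table statistics, but the view and column names here (SYSTABLESTAT, NUMBER_ROWS, DATA_SIZE) are assumptions to verify against your DB2 version:

SELECT *
FROM OPENQUERY(CMSPROD,
    'SELECT TABLE_NAME, NUMBER_ROWS, DATA_SIZE
     FROM QSYS2.SYSTABLESTAT
     WHERE TABLE_SCHEMA = ''MHDLIB''');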
I have created a view that's pulling data from two different tables to combine them into one report.

Table 1 lists the client code and table 2 lists the client partner, and they're linked by a common field.

When running the report, the result shows the client codes with their respective partner; however, any client codes that don't have a partner are not displayed, and I need all client codes to be displayed even if there's no partner.

Is there a way I can make this display all results, so that if the client partner doesn't exist it still displays NULL for the partner but still shows the client code?
Script:
SELECT TOP (100) PERCENT
    C.cltCode AS ClientCode,
    C.cltSortName AS SortName,
    C.cltTerminationDate AS [Term date],
    dbo.vcltAttrib6.ainTVal AS Department,
    C.objInstID AS ClientID
FROM dbo.cdbClient AS C
INNER JOIN dbo.vcltAttrib6 ON C.objInstID = dbo.vcltAttrib6.ainObjectInstID
GROUP BY C.cltSortName, C.cltTerminationDate, dbo.vcltAttrib6.ainTVal, C.objInstID, C.cltCode
ORDER BY ClientID
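A hedged fix: the INNER JOIN is what drops clients without a vcltAttrib6 row; a LEFT OUTER JOIN keeps them and returns NULL for the Department. The GROUP BY is dropped here since no aggregates are involved (an assumption about the intent):

SELECT
    C.cltCode AS ClientCode,
    C.cltSortName AS SortName,
    C.cltTerminationDate AS [Term date],
    A.ainTVal AS Department,
    C.objInstID AS ClientID
FROM dbo.cdbClient AS C
LEFT OUTER JOIN dbo.vcltAttrib6 AS A ON C.objInstID = A.ainObjectInstID
ORDER BY C.objInstID;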
I want to select all the records and have them be in alphabetical order, first by lastname, then by firstname, then by address. HOWEVER, and this is the tricky part, I want to group names together that have the same address. So, in this example, I want the results to be in this order:
Hall  C  6309 N Olive
Hall  P  6309 N Olive      <---- grouped with the C record because they have the same address
Hall  E  5488 W Catalina   <---- back to alphabetical by first name
Hall  J  7222 N Cocopas
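A hedged sketch (table and column names assumed): sort by last name, then by the smallest first name within each (lastname, address) group so records sharing an address travel together, then alphabetically inside the group:

SELECT lastname, firstname, address
FROM dbo.People
ORDER BY lastname,
         MIN(firstname) OVER (PARTITION BY lastname, address),
         address,
         firstname;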
I am getting a tab character at the end of my query fields and have been trying various things to fix it, such as using the REPLACE function below, but I still get the tabs!
SELECT
    CAST(REPLACE(NAMEALIAS, CHAR(9), '') AS CHAR(40)) + ',' AS PRODNAME,
    CAST(REPLACE(ISNULL(GLOBALTRADEITEMNUMBER, 0), CHAR(9), '') AS CHAR(18)) AS EANNO,
    LTRIM(CAST(ISNULL(GLOBALTRADEITEMNUMBER, 0) AS CHAR(18))) AS KONSEAN,
    LTRIM(CAST(I.ITEMID AS CHAR(8))) AS PRODCODE,
    '00'
FROM INVENTTABLE I
LEFT JOIN INVENTITEMGROUPITEM IG ON I.ITEMID = IG.ITEMID
I have a lot of rows of hours, set up like this: 0745, 0800, 2200, 1145 and so on (varchar(5), for some reason).
These are converted into a smalldatetime like this:
CONVERT(smalldatetime, STUFF(timestarted, 3, 0, ':')) [this would give output like this - 1900-01-01 11:45:00]
This code has been in place for years...and we stick the date on later from another column.
But recently, it's started to fail for some rows, with "The conversion of a varchar data type to a smalldatetime data type resulted in an out-of-range value".
My assumption is that the new data being added is junk. If I query for these values and just list them (rather than also adding a column that converts them), that's fine, of course. I've checked all the stuffed (but not yet converted, so 11:45 rather than 1145) output with ISDATE(), and it passes. There are no times with hours > 23 or minutes > 59 either.

If I add the CONVERT in, we see the error message. But here's the oddity: if I place all of the rows into a holding table and retry the conversion, there is no error. It's this last bit that puzzles me, plus I can't see any errors in the hours data that would cause a conversion problem.

I've put the whole thing into a cursor to try to trap the error rows too, but everything processes fine. Why would it fail when NOT in a cursor?
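One hedged explanation: the optimizer is free to evaluate the CONVERT before, or interleaved with, the query's other predicates, so rows you believed were filtered out can still reach the conversion; moving the data to a holding table or a cursor changes the plan, which would explain the inconsistency. On SQL Server 2012, TRY_CONVERT can isolate the offending rows (the table name here is hypothetical):

SELECT timestarted
FROM dbo.TimesTable   -- hypothetical name
WHERE TRY_CONVERT(smalldatetime, STUFF(timestarted, 3, 0, ':')) IS NULL
  AND timestarted IS NOT NULL;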
I have created a view that displays client sortname, partner and date added; it returns 7 results.

When I add another table to this view to display the industry, it then gives me only 4 results, because the other 3 results have no industry, instead of giving me all 7 results with the Industry column empty for those 3.

Is there a way I can make it show all 7 results and have the rows where the industry is empty still display, instead of showing no results at all for them?
Script:

SELECT
    dbo.cdbClient.cltSortName AS ClientName,
    dbo.vcltAttrib4.ainTVal AS ClientPartner,
    dbo.vcltAttrib422.ainDVal AS [Date Added],
    dbo.cdbAttribInst.ainTVal AS Industry
FROM dbo.cdbClient
LEFT OUTER JOIN dbo.cdbObject ON dbo.cdbClient.cltCategoryID = dbo.cdbObject.objID
LEFT OUTER JOIN
[Code] ....
In the above script, the cdbAttribInst table has the Industry column I need, which is ainTVal...
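A hedged sketch of the likely fix: the join to cdbAttribInst must be a LEFT OUTER JOIN, and any filter on that table must sit in the ON clause rather than in WHERE; a WHERE filter on the outer table's columns silently turns the outer join back into an inner one. The join key and attribute filter below are assumptions:

SELECT  c.cltSortName AS ClientName,
        a.ainTVal     AS Industry
FROM    dbo.cdbClient AS c
LEFT OUTER JOIN dbo.cdbAttribInst AS a
        ON  a.ainObjectInstID = c.objInstID   -- assumed join key
        AND a.ainAttributeID  = 422           -- hypothetical attribute filter
ORDER BY c.cltSortName;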
I can find many examples of loading DBCC results into tables. They all begin with a CREATE TABLE statement defining the results. My question is: other than trial and error, is there a way to determine what data types will be returned? Sure, you can say that the first element looks like an integer, but is it really a BIGINT? And that text string could be VARCHAR(MAX), but will CHAR(2) work?
I'm not looking for an answer for a specific DBCC function, but rather a generic way I can determine the characteristics of any DBCC result set.
I tried
SELECT * INTO #tmp
FROM OPENROWSET('SQLOLEDB',
                'Server=ray;Trusted_Connection=Yes;Database=Ed_sandbox',
                'SET FMTONLY OFF; DBCC loginfo WITH tableresults');
but I got back
Msg 11527, Level 16, State 1, Procedure sp_describe_first_result_set, Line 1
The metadata could not be determined because statement 'DBCC loginfo WITH tableresults' does not support metadata discovery.
I have a table that has multiple transactions for stock items.
This table holds all records relating to items that are inducted onto the system and their movement. For each stock item I am interested in getting the drop destination, if it has one, and only when it follows the sequential order of "Inducted > OnTransport > Dropped" (this sequence isn't always the case). Also note that the CreatedDate for the Inducted and OnTransport records of a valid sequence is always the same. Below is a valid sequence for a stock item, so I would want to return 'Lane01' for the destination of this occurrence of the stock item; if the item didn't have a valid drop location, the destination would be blank. Also note that each stock item can be inducted more than once per day.

I think I have managed to build the SQL below, but it will only do one item at a time, so it would have to be wrapped in a function. Is there a way of writing a set-based SELECT statement that gets all the inducted items and, for the ones that do follow "Inducted > OnTransport > Dropped", returns the destination they were dropped at? I've attached the scripts below:
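A hedged set-based sketch; since the real scripts were attached separately, the table and column names here (dbo.StockMovements, ItemID, Status, Location, CreatedDate) are assumptions. LEAD looks at the next two rows per item; because Inducted and OnTransport share a CreatedDate, Status is used as a tiebreaker ('Inducted' happens to sort before 'OnTransport'):

WITH Seq AS
(
    SELECT ItemID, Status, CreatedDate,
           LEAD(Status, 1)   OVER (PARTITION BY ItemID ORDER BY CreatedDate, Status) AS NextStatus,
           LEAD(Status, 2)   OVER (PARTITION BY ItemID ORDER BY CreatedDate, Status) AS NextNextStatus,
           LEAD(Location, 2) OVER (PARTITION BY ItemID ORDER BY CreatedDate, Status) AS DropLocation
    FROM dbo.StockMovements
)
SELECT ItemID,
       CreatedDate AS InductedDate,
       CASE WHEN NextStatus = 'OnTransport' AND NextNextStatus = 'Dropped'
            THEN DropLocation ELSE '' END AS Destination
FROM Seq
WHERE Status = 'Inducted';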
I have written a SQL statement. There is a table called Customers; it contains all customer data, with CustomerID as the PK. There is another table of logs, VIESLog, which contains CustomerID as a foreign key and a field used to track customer accounts older than 90 days. That field's name is "Checked".

What I need is to get all records from these 2 tables and remove/hide the customers that are more than 90 days old from the record set. See my illustration.

I have written this code, but I don't understand how to remove the older customers from the result (because the Customers table doesn't contain a column called "Checked"):
SELECT * FROM [dbo].[Customers],[dbo].[VIESLog] WHERE [dbo].[VIESLog].[Checked] < DATEADD(day, -90, GETDATE())
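A hedged rewrite: because the two tables are only listed side by side, the query above cross-joins them instead of relating them. A NOT EXISTS keeps every customer except those with a VIESLog row checked more than 90 days ago (the CustomerID join key follows the description above):

SELECT  c.*
FROM    [dbo].[Customers] AS c
WHERE   NOT EXISTS
        (   SELECT 1
            FROM   [dbo].[VIESLog] AS v
            WHERE  v.CustomerID = c.CustomerID
              AND  v.Checked < DATEADD(day, -90, GETDATE())
        );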
I have a simple UPDATE statement (see the example below). When it runs, I expect to see the number of records updated, and that information does show up in the Messages tab; however, the Results tab displays "(No column name) 40". Where is the 40 being generated from? I have tried restarting SSMS 2012, restarting my computer, and turning NOCOUNT on and off.
"UPDATE TableA SET Supervisor = 'A123' WHERE PersonnelNumber = 'B456'"
What I would like to end up with is a pivot table of each account, the trigger code and service codes attached to that account, and the rate for each.
I have been able to build the pivot dynamically, but I'm not joining correctly, as it returns every dynamic column, not just the columns for a given trigger code. The code below returns the account and trigger code, but also every service code, regardless of which trigger code it belongs to, just showing null values for the rest.
What I would like to get is just the service codes and the appropriate trigger code for each account.
SELECT @cols = STUFF((SELECT DISTINCT ',' + ServiceCode
                      FROM TriggerTable
                      FOR XML PATH(''), TYPE).value('(./text())[1]', 'VARCHAR(MAX)')
                     , 1, 1, '')   -- STUFF(..., 1, 1, '') strips the single leading comma
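A hedged sketch of the full pattern: restrict the dynamic column list to the service codes of one trigger code before pivoting, so unrelated columns never appear. Names other than ServiceCode and TriggerTable (that is, TriggerCode, Account, and Rate) are assumptions based on the description:

DECLARE @trigger VARCHAR(20) = 'TRIG01',        -- hypothetical trigger code
        @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(ServiceCode)
                      FROM TriggerTable
                      WHERE TriggerCode = @trigger      -- the key restriction
                      FOR XML PATH(''), TYPE).value('(./text())[1]', 'NVARCHAR(MAX)')
                     , 1, 1, '');

SET @sql = N'SELECT Account, TriggerCode, ' + @cols + N'
FROM (SELECT Account, TriggerCode, ServiceCode, Rate FROM TriggerTable) AS s
PIVOT (MAX(Rate) FOR ServiceCode IN (' + @cols + N')) AS p
WHERE TriggerCode = @trig;';

EXEC sys.sp_executesql @sql, N'@trig VARCHAR(20)', @trig = @trigger;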
I am trying to calculate the time difference between the value in a row and the min value in the table. Say the min value in the table is 2014-05-29 14:44:17.713 (this is the start time of the test), and the test ends at 2014-05-29 17:10:17.010. Many rows are recorded between that start and end time, and a timestamp is created for each row. I am trying to calculate the elapsed time and include it as a column in the results:

timestamp (value in row) - min(timestamp) = elapsed time for that test, where Channel = '273'
Here is the table DDL
CREATE DATABASE SpecTest;
GO
USE SpecTest
GO

CREATE TABLE [dbo].[Spec1](
    [Spec1ID] [int] IDENTITY(1,1) NOT NULL,
    [Channel] [int] NOT NULL,
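A hedged sketch of the elapsed-time calculation: a window MIN() exposes the test's start time on every row, and DATEDIFF turns the gap into seconds. The [timestamp] column name follows the question; the full DDL was truncated above:

SELECT  Spec1ID,
        Channel,
        [timestamp],
        DATEDIFF(SECOND, MIN([timestamp]) OVER (), [timestamp]) AS ElapsedSeconds
FROM    dbo.Spec1
WHERE   Channel = 273;    -- Channel is an int per the DDL, so no quotes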
I have decided to use Wufoo online forms to collect evaluation data from my clients. Each form will have a different type of data.
For example
Form 1 might have: Client id, name, user id, comments
Form 2 might have: Client id, name, user id, address, comments, telephone number
Form 3 might have: Client id, name, user id, comments, date of birth, email address
Each form can have 20+ data fields, which can all differ from the other forms. My question is: what is the best way to store all the data in SQL Server without creating new columns with new data types every time a new form is created in Wufoo?
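One common pattern worth sketching (an option, not the only answer): an entity-attribute-value layout, where each new form field becomes a row rather than a column, so no schema change is needed when a new Wufoo form appears:

CREATE TABLE dbo.FormSubmission (
    SubmissionID INT IDENTITY(1,1) PRIMARY KEY,
    FormName     NVARCHAR(100) NOT NULL,
    SubmittedAt  DATETIME2     NOT NULL DEFAULT SYSDATETIME()
);

CREATE TABLE dbo.FormSubmissionValue (
    SubmissionID INT           NOT NULL REFERENCES dbo.FormSubmission,
    FieldName    NVARCHAR(100) NOT NULL,   -- e.g. 'Client id', 'date of birth'
    FieldValue   NVARCHAR(MAX) NULL,       -- everything stored as text
    PRIMARY KEY (SubmissionID, FieldName)
);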
SELECT COUNT(id) AS viewcount
FROM location_views
WHERE createdate > DATEADD(dd, -30, GETDATE()) AND objectid = 357;

SELECT COUNT(id) * 2 AS clickcount
FROM extlinks
WHERE createdate > DATEADD(dd, -30, GETDATE()) AND objectid = 357;
But I want to add the COUNT statements, so this is what I did:
SELECT COUNT(vws.id) + COUNT(lnks.id) * 2 AS totalcount
FROM location_views vws, extlinks lnks
WHERE (vws.createdate > DATEADD(dd, -30, GETDATE()) AND vws.objectid = 357)
   OR (lnks.createdate > DATEADD(dd, -30, GETDATE()) AND lnks.objectid = 357)
It turns out the query becomes immensely slow. There must be something I'm doing wrong here that results in such bad performance, but what is it?
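Listing both tables in a single FROM with only an OR to relate them makes SQL Server build a Cartesian product (every location_views row paired with every extlinks row) before filtering, which explains the slowdown. A hedged rewrite that sums two independent counts instead:

SELECT
    (SELECT COUNT(id) FROM location_views
     WHERE createdate > DATEADD(dd, -30, GETDATE()) AND objectid = 357)
  + (SELECT COUNT(id) * 2 FROM extlinks
     WHERE createdate > DATEADD(dd, -30, GETDATE()) AND objectid = 357) AS totalcount;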
ALTER PROCEDURE dbo.usp_Create_Fact_Job (@startDate date, @endDate date)
AS
/*--Debug--*/
--DECLARE @startDate date
--DECLARE @endDate date
--SET @startDate = '01 APR 2014'
--SET @endDate = '02 APR 2014'
/*-- end of Debug*/
;
WITH CTE_one AS ( blah blah blah )
SELECT a whole bunch of fields from the joined tables and CTEs... When I run the code inside the stored procedure, declaring and setting the start and end dates manually, it runs in 4 minutes (missing some indexes). When I call the stored procedure with EXEC, it never returns a result set, but it doesn't error out either. I have left it for 40 minutes and still no joy. The sproc is reasonably complicated: 6 CTEs to find the most recent version of records, 2 joins to parent tables (parent and grandparent), 3 joins to child tables (child, grandchild and great-grandchild), and 3 joins to lookup views, each of which self-references a table to filter for the last version of a record.
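A hedged guess: fast when run with literals but never finishing via EXEC is the classic parameter-sniffing symptom. Two common experiments, sketched together below with a placeholder CTE (pick one, not both): copy the parameters into local variables, or add OPTION (RECOMPILE) to the big statement so it compiles with the actual values:

ALTER PROCEDURE dbo.usp_Create_Fact_Job (@startDate DATE, @endDate DATE)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @start DATE = @startDate,   -- local copies defeat the sniffed estimates
            @end   DATE = @endDate;

    WITH CTE_one AS
    (
        SELECT 1 AS placeholder          -- stand-in for the real CTEs
    )
    SELECT *
    FROM CTE_one
    -- ... original joins and filters, using @start/@end ...
    OPTION (RECOMPILE);                  -- or rely on this alone
END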
Say you have a table that has records with numbers, sort of like lottery winning numbers:
TableWinners
num1  num2  num3  num4  num5  num6
33    52    47    23    17    28
... more records with similar structure.
Then you have another table with chosen numbers, same structure as above, TableGuesses.
How could you do the following comparisons between TableGuesses and TableWinners:
1. Compare a single record in TableGuesses to a single record in TableWinners to get a count of the number of numbers that match (kind of a typical lottery type of thing).
2. Compare a single record in TableGuesses to ALL records in TableWinners to see which record in TableWinners is the closest match to the selected record in TableGuesses.
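A hedged sketch covering both comparisons: unpivot each winners row into a six-value set with VALUES, count how many of those values appear in the chosen guess (held in variables here), and sort by the match count. The top row answers #2; any single row's Matches answers #1:

DECLARE @g1 INT = 33, @g2 INT = 52, @g3 INT = 47,
        @g4 INT = 23, @g5 INT = 17, @g6 INT = 28;   -- the selected guess record

SELECT  w.num1, w.num2, w.num3, w.num4, w.num5, w.num6, m.Matches
FROM    TableWinners AS w
CROSS APPLY
(   SELECT COUNT(*) AS Matches
    FROM  (VALUES (w.num1), (w.num2), (w.num3),
                  (w.num4), (w.num5), (w.num6)) AS wn(n)
    WHERE wn.n IN (@g1, @g2, @g3, @g4, @g5, @g6)
) AS m
ORDER BY m.Matches DESC;   -- first row = closest winning record to the guess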
We can use comparison operators with strings as well. Hence, I tried the following query on a SQL Server 2012 instance with the sample AdventureWorks2012 database (the collation of the database and of the column is the default, SQL_Latin1_General_CP1_CI_AS):
USE AdventureWorks2012;
GO

--Returns 5 records
SELECT pp.Name
FROM Production.Product AS pp
WHERE pp.Name >= N'Short' AND pp.Name <= N'Sport';
GO
The query returns only 5 records, despite the fact that the search is inclusive and the Production.Product table contains records that begin with "Sport".

Now, replacing "Sport" with "Sporu" (just moving one character up in the alphabet, to verify whether characters after the word have any impact on the search) gives me 8 records.
USE AdventureWorks2012;
GO

--Returns 8 records
SELECT pp.Name
FROM Production.Product AS pp
WHERE pp.Name >= N'Short' AND pp.Name <= N'Sporu';
GO
What's going on inside SQL Server that allows it to fetch "Short-Sleeve Classic Jersey" for the starting word "Short" but prevents it from fetching "Sport-100 Helmet" for the ending word "Sport", despite the search being inclusive?
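For what it's worth, the behavior is consistent with the prefix rule of string comparison: when one string is a prefix of another, the shorter string sorts first, so N'Sport-100 Helmet' compares greater than N'Sport' (and therefore falls outside the upper bound) while still being less than N'Sporu'. A quick sketch to confirm on your own instance:

SELECT CASE WHEN N'Sport-100 Helmet' > N'Sport' THEN 'after ''Sport''' ELSE 'before ''Sport''' END AS VsSport,
       CASE WHEN N'Sport-100 Helmet' < N'Sporu' THEN 'before ''Sporu''' ELSE 'after ''Sporu''' END AS VsSporu;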
I am getting inconsistent results when BULK INSERTing data from a tab-delimited text file. As part of my testing, I run the same code on the same file again and again, and I get different results every time! I get this on SQL 2005 and SQL 2012.
We have an application that imports data from a spreadsheet. The sheet contains section headers with account numbers and detail rows with transactions by date:
AAAA.1234                               /* (account number) */
1/1/2015    $150       First Transaction
1/3/2015    $24.233    Second Transaction
BBBB.5678
1/1/2015    $350       Third Transaction
1/3/2015    $24.233    Fourth Transaction
My import program saves this spreadsheet as tab-delimited text, then I use BULK INSERT to bring the data into a generic table full of varchar(255) fields. There are about 90,000 rows in each day's data; after the BULK INSERT, about half of them are removed for various reasons.
Next I add a RowID column to the table with the IDENTITY (1,1) property. This gives my raw data unique row numbers.
I then run a routine that converts and copies those records into another holding table that's a copy of the final destination table. That routine parses through the data, assigning the account number in the section header to each detail row. It ends up looking like this:
AAAA.1234    1/1/2015    $150       First Purchase
AAAA.1234    1/3/2015    $24.233    Second Purchase
BBBB.5678    1/1/2015    $350       Third Purchase
BBBB.5678    1/3/2015    $24.233    Fourth Purchase
My technique: I use a cursor to get the starting RowID for each account number, then use the upper and lower RowIDs to do an INSERT into the final table. The query looks like this:
SELECT RowID, SUBSTRING(RowHeader, 6, 4) + '.UBC1' AS AccountNumber
FROM GenericTable
WHERE RowHeader LIKE '____.____%'
Results look like this:
But every time I run the routine, I get different numbers!
Needless to say, my results are not accurate. I get inconsistent results EVERY TIME. Here is my code, with table, field, and account names changed for business confidentiality:
TRUNCATE TABLE GenericImportTable;
ALTER TABLE GenericImportTable DROP COLUMN RowID;

BULK INSERT GenericImportTable
FROM 'SERVERGeneralAppnameDataFile.2015.05.04.tab.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', FIRSTROW = 6);

ALTER TABLE GenericImportTable ADD RowID int IDENTITY(1,1) NOT NULL;

SELECT RowID, SUBSTRING(RowHeader, 6, 4) + '.UBC1' AS AccountNumber
FROM GenericImportTable
WHERE RowHeader LIKE '____.____%';
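A hedged observation: ALTER TABLE ... ADD RowID ... IDENTITY numbers the existing rows in no guaranteed order, so the mapping between RowID and file order can change from run to run, which fits the symptom exactly. One workaround to sketch (the path, format file, and column list are hypothetical) is to create the staging table with the IDENTITY column already in place and use a format file so the loader skips it; rows are then numbered as they are inserted:

CREATE TABLE dbo.GenericImportTable
(
    RowID     INT IDENTITY(1,1) NOT NULL,
    RowHeader VARCHAR(255) NULL
    -- ... the remaining generic varchar(255) columns ...
);

BULK INSERT dbo.GenericImportTable
FROM '\\SERVER\share\DataFile.tab.txt'      -- hypothetical path
WITH (FORMATFILE = 'C:\bcp\import.fmt',     -- hypothetical format file that skips RowID
      FIRSTROW = 6);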
I have been given a request at work that requires us to run the SQL scripts held in a report configuration table and output the results as .csv or .xls files in a desired folder, with a file name that is also specified within the report config table.

An example of the table is per the script below, with column A being the desired file name to be created, while column B contains the script that will be executed.

Ideally I require a script that I can drop into a stored proc that will run down each row in the table and export each row's results as a separate file in a desired folder.
------------------------------------------------------------------------------------------
----- SAMPLE TABLE SCRIPT ----------------------------------------------------------------
------------------------------------------------------------------------------------------
/****** Object: Table [dbo].[Test_Table_PS] Script Date: 21/09/2015 09:14:57 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Test_Table_PS](
    [File_Name] [nvarchar](100) NULL,
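A hedged sketch of one way to do it: a cursor over the config table plus bcp through xp_cmdshell (which must be enabled and carries security implications). [File_Name] comes from the sample table; the [Script] column name, the output folder, and the requirement that each script reference database-qualified objects are assumptions:

DECLARE @file NVARCHAR(100), @script NVARCHAR(MAX), @cmd VARCHAR(4000);

DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT [File_Name], [Script] FROM dbo.Test_Table_PS;   -- [Script] is an assumed column name

OPEN c;
FETCH NEXT FROM c INTO @file, @script;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- bcp queryout writes the query's result set to a file; -t, makes it comma separated
    SET @cmd = 'bcp "' + CAST(@script AS VARCHAR(3000)) + '" queryout "C:\Exports\'
             + CAST(@file AS VARCHAR(100)) + '.csv" -c -t, -T -S ' + @@SERVERNAME;
    EXEC master..xp_cmdshell @cmd;
    FETCH NEXT FROM c INTO @file, @script;
END
CLOSE c;
DEALLOCATE c;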
I am looking for a way to produce a result set that shows each item and the number of records where it was purchased or sold. Below is my sample data.
Use tempdb
go

create table #ItemLedgerEntry
(
    ENTRYNO     INT          NOT NULL,
    ITEMNO      VARCHAR(50)  NOT NULL,
    POSTINGDATE DATETIME     NOT NULL,
    ENTRYTYPE   INT          NOT NULL
[code]....
I know something like this will give me results, but I'd like to run one query:
select itemno, count(*)
from #ItemLedgerEntry
where entrytype = 0
group by itemno
order by count(*) desc
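A hedged single-query alternative using conditional aggregation; the assumption that entrytype 0 means purchased and 1 means sold is mine, since the sample data was truncated above:

SELECT  itemno,
        SUM(CASE WHEN entrytype = 0 THEN 1 ELSE 0 END) AS PurchaseCount,   -- assumed mapping
        SUM(CASE WHEN entrytype = 1 THEN 1 ELSE 0 END) AS SaleCount        -- assumed mapping
FROM    #ItemLedgerEntry
GROUP BY itemno
ORDER BY PurchaseCount DESC;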
I am using SQL Server 2012. A user tried to export tables from a GIS application to the SQL database.

After the export, on logging in to the SQL Server, we see all the tables have his name as the schema, not dbo.

He was added as a login and as a user via a Windows group; that is, he is a member of the Windows group.

I assumed that on export the default schema would be dbo, but apparently not. I went to the settings and explicitly set the default schema for the Windows group user to dbo. But he tried again, and it still uses his username as the schema prefix on the tables. Just wondering, why is this?
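A hedged place to look: when a member of a Windows group connects, SQL Server may also map the person to an individual database user, and that user's own default schema (or an auto-created schema named after the login) wins over the group setting. Worth checking what principals and default schemas actually exist, and confirming the group-level setting (supported from SQL Server 2012; the group name below is hypothetical):

SELECT name, type_desc, default_schema_name
FROM sys.database_principals
WHERE type IN ('U', 'G');          -- Windows users and Windows groups

ALTER USER [DOMAIN\GisUsers] WITH DEFAULT_SCHEMA = dbo;   -- hypothetical group name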