Hello, and thanks in advance for any and all help on this post!
I am trying to create a report that uses a simple select against one table in the database:
select a,b,c
from MyTable
where d = 1
order by a
For ease of explanation, this returns 75 records. The report is to be used as a one-page flyer. I can create a single table and format it, but I end up with three pages instead of one. My thought was to split the returned data between two side-by-side tables on this report. I cannot seem to find a way to do this through the properties of each table, nor an example of an expression that could help, beyond RowCount (which simply produces a page break), nested SELECTs to emulate MySQL's LIMIT, or SELECT TOP n ... ORDER BY ASC/DESC to get a top-N or bottom-N from a SQL query.
I know I can't be the only one to have ever thought of this as a solution (I hope not, at least), so I was hoping someone here might be able to help out. Thanks again in advance!
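One way this is sometimes handled is to have the dataset tag each row with a half number so the two side-by-side tables can each filter on it. A minimal sketch, reusing the query above (the BucketNo name is mine):
<code>
-- Tag each row with 1 (first half) or 2 (second half) of the ordered result set.
;WITH Numbered AS
(
    SELECT a, b, c,
           ROW_NUMBER() OVER (ORDER BY a) AS rn,
           COUNT(*) OVER ()               AS total
    FROM MyTable
    WHERE d = 1
)
SELECT a, b, c,
       CASE WHEN rn <= (total + 1) / 2 THEN 1 ELSE 2 END AS BucketNo
FROM Numbered
ORDER BY a;
</code>
The left table then filters on BucketNo = 1 and the right table on BucketNo = 2, so the 75 rows split roughly evenly across the two tables on one page.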
This query was working well because I used to be interested in only one counter returned in the column, 'Free Megabytes'. I now have additional data that shows up as 'Total Disk Space'. Ideally, the query would return the total disk space next to the free megabytes on the same row for the same disk drive. Here are a couple of rows of sample output:
AverageValue  InstanceName  ObjectName   CounterName
44549         C:            LogicalDisk  Free Megabytes
44548         C:            LogicalDisk  Free Megabytes
69452         C:            LogicalDisk  Total Disk Space
69452         C:            LogicalDisk  Total Disk Space
This is the ideal format, the average value column goes away:
InstanceName  ObjectName   Free Megabytes  Total Disk Space
C:            LogicalDisk  44549           69452
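A conditional-aggregation query can produce that shape. A minimal sketch, with the source table name (PerfCounters) assumed since it is not given in the post; MAX simply collapses the duplicate samples:
<code>
SELECT InstanceName,
       ObjectName,
       MAX(CASE WHEN CounterName = 'Free Megabytes'   THEN AverageValue END) AS [Free Megabytes],
       MAX(CASE WHEN CounterName = 'Total Disk Space' THEN AverageValue END) AS [Total Disk Space]
FROM PerfCounters   -- hypothetical name for the table/query feeding the rows above
GROUP BY InstanceName, ObjectName;
</code>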
One table is like this: CREATE TABLE p1 (SNo VARCHAR(30)). The data in table p1 is:
Sno
1,0
12,0
1,20
100,21
1001,21
There is one more table, p2: CREATE TABLE p2 (TravelerID INT, GDId INT)
Now my requirement is: whatever data is in the left-side part of SNo (before the comma), like 1, 12, 1, 100, 1001, should be pushed into the TravelerID column of table p2, and the right-side part (data like 0, 0, 0, 21, 21) should be pushed into the GDId column of p2. Ultimately the table should look like the format below.
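A minimal sketch of that push, assuming SNo always contains exactly one comma as in the sample data:
<code>
INSERT INTO p2 (TravelerID, GDId)
SELECT CAST(LEFT(SNo, CHARINDEX(',', SNo) - 1) AS INT),          -- part before the comma
       CAST(SUBSTRING(SNo, CHARINDEX(',', SNo) + 1, 30) AS INT)  -- part after the comma
FROM p1;
</code>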
PremID is just the PrimaryKey. Then there's the year, and then a load of float fields, each representing a premium paid for each month of the year (01=Jan, 02=Feb etc).
Now what I am looking to achieve is to create separate rows for Table1 from this data.
Starting in January (Prem01), I wish to record a separate line each time the premium amount changes. The best way of explaining this is to give an example based on the above data.
For the first row (2013), the premium at the start of the year (Prem01) is 100.00. This premium remains until it changes in Prem10 to 130.00. It then remains 130.00 for the rest of the year (Prem10 to Prem12). For this I would want to create 2 rows of data for Table1. The result should look like below.
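A minimal sketch of one way to get those change rows, assuming a source table Premiums(PremID, PremYear, Prem01 ... Prem12); the table and column names are guesses from the description. The months are unpivoted and a row is kept only where the amount differs from the previous month (January is always kept):
<code>
;WITH Unpivoted AS
(
    SELECT PremID, PremYear, MonthNo, Amount
    FROM Premiums
    UNPIVOT (Amount FOR MonthNo IN
             (Prem01, Prem02, Prem03, Prem04, Prem05, Prem06,
              Prem07, Prem08, Prem09, Prem10, Prem11, Prem12)) AS u
)
SELECT PremID, PremYear, MonthNo, Amount
FROM Unpivoted a
WHERE NOT EXISTS
(
    SELECT 1
    FROM Unpivoted b
    WHERE b.PremID   = a.PremID
      AND b.PremYear = a.PremYear
      AND b.MonthNo  = 'Prem' + RIGHT('0' + CAST(CAST(RIGHT(a.MonthNo, 2) AS INT) - 1 AS VARCHAR(2)), 2)
      AND b.Amount   = a.Amount
);
</code>
For the 2013 example this returns one row for Prem01 (100.00) and one for Prem10 (130.00), which can then be inserted into Table1.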
All the columns are separated with a "|", but the number of columns is not fixed; lines 1 and 2 have 4 columns and line 3 has 7 columns.
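A minimal sketch of one way to break such a line apart when the column count varies, using a plain WHILE loop over CHARINDEX (the sample line and the table variable are mine):
<code>
DECLARE @line VARCHAR(4000), @pos INT, @colNo INT;
DECLARE @cols TABLE (ColNo INT, ColValue VARCHAR(4000));

SET @line = 'a|b|c|d';        -- example line; works for any number of columns
SET @line = @line + '|';      -- trailing delimiter simplifies the loop
SET @colNo = 1;
SET @pos = CHARINDEX('|', @line);

WHILE @pos > 0
BEGIN
    INSERT INTO @cols (ColNo, ColValue) VALUES (@colNo, LEFT(@line, @pos - 1));
    SET @line  = SUBSTRING(@line, @pos + 1, 4000);
    SET @colNo = @colNo + 1;
    SET @pos   = CHARINDEX('|', @line);
END

SELECT ColNo, ColValue FROM @cols;
</code>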
I have a large, poorly designed table (inherited) with a Name field that contains comma-delimited text holding address information. I need to do several things with it, but unfortunately there doesn't appear to be any true consistency in it. When it displays in its own text box it works, placing each section on a new line, and looks OK. But I need to pull it apart and place things like unit number and building name in their own columns. In the data, a given piece could be in the 2nd, 3rd, or 4th position, depending on what came first. The data looks something like the following:
UnitNumber/StreetNumber Space StreetName (Building Name), Suburb, City, Country
Some addresses won't have a unit number, suburb, or country, so when splitting you could end up with suburbs and cities in multiple columns even if you try to stagger the split process. Has anybody got a good tool or reference site for dealing with this sort of problem? I have a table I've made up with some of the street names that could be used for comparing against existing records, but it is by no means foolproof due to spelling inconsistencies. I also have another list of common building names that could be used to compare, remove, and place in the new building column.
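There is no clean tool-free answer for free-text addresses, but rule-based extraction can at least peel off the unambiguous pieces before the lookup-table comparisons. A tiny sketch, with the table and column names assumed:
<code>
SELECT a.ID,
       -- Building name: whatever sits between the parentheses, if any
       CASE WHEN CHARINDEX('(', a.Name) > 0 AND CHARINDEX(')', a.Name) > CHARINDEX('(', a.Name)
            THEN SUBSTRING(a.Name,
                           CHARINDEX('(', a.Name) + 1,
                           CHARINDEX(')', a.Name) - CHARINDEX('(', a.Name) - 1)
       END AS BuildingName,
       -- First comma-separated segment (unit/street part)
       CASE WHEN CHARINDEX(',', a.Name) > 0
            THEN LEFT(a.Name, CHARINDEX(',', a.Name) - 1)
            ELSE a.Name
       END AS FirstSegment
FROM Addresses a;     -- hypothetical table name
</code>
The remaining segments can then be matched against the street-name and building-name lists in a separate pass.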
What I need is to split the data into two columns: if the value in column Main starts with 'PR-', output it to column P, and if it starts with 'CC-', output it to column C (the output needs to be in one table).
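A minimal sketch of that, assuming the source is a single-column table (the table name is a placeholder):
<code>
SELECT CASE WHEN Main LIKE 'PR-%' THEN Main END AS P,
       CASE WHEN Main LIKE 'CC-%' THEN Main END AS C
FROM SourceTable;      -- hypothetical source table
</code>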
I receive a file that has hyphens between data items, such as 123-aed-edr-45r-ui9 or 1-ed3-45r-rrr-98u. I need to split the values to load into a table that will hold the 5 separate data items. The fields will always have the hyphens but could be different lengths. Any idea on the best approach to split this in T-SQL?
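A minimal sketch assuming each value always has exactly four hyphens, using chained CHARINDEX positions via CROSS APPLY (the staging table and column names are mine):
<code>
SELECT s.RawValue,
       LEFT(s.RawValue, p1.pos - 1)                           AS Part1,
       SUBSTRING(s.RawValue, p1.pos + 1, p2.pos - p1.pos - 1) AS Part2,
       SUBSTRING(s.RawValue, p2.pos + 1, p3.pos - p2.pos - 1) AS Part3,
       SUBSTRING(s.RawValue, p3.pos + 1, p4.pos - p3.pos - 1) AS Part4,
       SUBSTRING(s.RawValue, p4.pos + 1, 100)                 AS Part5
FROM Staging s
CROSS APPLY (SELECT CHARINDEX('-', s.RawValue))             p1(pos)
CROSS APPLY (SELECT CHARINDEX('-', s.RawValue, p1.pos + 1)) p2(pos)
CROSS APPLY (SELECT CHARINDEX('-', s.RawValue, p2.pos + 1)) p3(pos)
CROSS APPLY (SELECT CHARINDEX('-', s.RawValue, p3.pos + 1)) p4(pos);
</code>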
Hi to everyone,

My problem is that I'm not quite sure which way I should go. Through a second application the user inputs a long string (let's say 128 characters) whose parts are separated by semicolons. Example:

A20;BU;AC40;MA50;E;E;IC;GREEN

Each of these positions is already defined in another table as a separate record; these are the keys, so to speak, so I have some properties for A20, BU, and so on. Because this long input string is a property of a device (which also has a lot of other properties), I see two different ways of storing the data:

1. In a stored procedure, split out each position separated by a semicolon and write it to a different table with the device index and the position within the long string, roughly like this:

Major device data table
ID    AnyData1  AnyData2  ...  AnyData3
123   MZD12     XX77      ...  any comment text
124   MZD13     XY55      ...  any other comment

String data table
fk_deviceId  position  value
123          1         A20
123          2         BU
123          3         AC40
...
123          8         GREEN

The device table also contains a pointer (position), which might change, to "highlight" a specified position. This way I can very easily find all the necessary data. The problem is that I need to move the device record data (from the other table) into a history table very often (on each update). That means I would also have to move all of these records (1 to 8, for example) to a separate history table holding the index for the historical device data set. That is a little inconvenient; in my opinion it will use too much storage, and in the code I would always have to shift these properties into a history table with indexes to a history table of the other properties.

2. The table would be built roughly like this:

Major device data table
ID    AnyData1  AnyData2  ...  AnyData3           stringProperty                  pointer
123   MZD12     XX77      ...  any comment text   A20;BU;AC40;MA50;E;E;IC;GREEN   3
124   MZD13     XY55      ...  any other comment  A20;BU;AC40;MA50;E;E;IC;GREEN   2

When writing to the device table there would just be an additional field for this string, and I would have a function that, according to the specified pointer, gets me the relevant part of the string on the fly whenever I need it. This does not require the other table and reduces the amount of data - not by a lot, but always. The inconvenience of this solution is that searching over parts of these strings will not be as fast, because there is no real index on them. If I wanted to find all devices whose current pointer value equals GREEN, I would need to use the function to get the value, and that call is not indexed, which, with a large amount of data, might be slow.

I would like to know your opinion about both solutions. Also, please point out any other problems with either solution that I might not have noticed.

With best regards
Matik
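For option 2, the "get the part at the pointer" helper could look roughly like this scalar function (the name and parameters are mine, not from the post); as noted above, filtering on its result cannot use an index:
<code>
CREATE FUNCTION dbo.fn_GetStringPart
(
    @list    VARCHAR(128),
    @pointer INT
)
RETURNS VARCHAR(128)
AS
BEGIN
    DECLARE @start INT, @i INT, @end INT;
    SET @start = 1;
    SET @i = 1;

    -- Walk past (@pointer - 1) semicolons
    WHILE @i < @pointer
    BEGIN
        SET @start = CHARINDEX(';', @list, @start) + 1;
        IF @start = 1 RETURN NULL;          -- fewer parts than @pointer
        SET @i = @i + 1;
    END

    SET @end = CHARINDEX(';', @list, @start);
    IF @end = 0 SET @end = LEN(@list) + 1;

    RETURN SUBSTRING(@list, @start, @end - @start);
END
</code>
With the string 'A20;BU;AC40;MA50;E;E;IC;GREEN' and pointer 3 this returns 'AC40'.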
I have a report I'm designing where, as a simple SQL report viewed only on a screen, it was irrelevant how wide it was. However, now I've been asked to duplicate this report in SSRS and to include the option to print it out.
Well, the problem is, as it stands - with 8pt font, even - it will require a sheet of paper about 24" wide to get all of a single row to print.
So I'm trying to create a tablix that will split the data into two sets of header/detail rows in the same tablix. Any workable solution that doesn't involve writing an app in BASIC or C would be appreciated.
We have a 5 TB database in our environment. Both the MDF and LDF are located on a single drive of 10 TB.
Now we want to move to a new server, but there we have multiple drives of at most 1 TB each. How can I go about splitting the data from one MDF file into multiple data files? And how about moving indexes?
SQL Version : Microsoft SQL Server 2012 (SP1) - 11.0.3513.0 (X64) - Enterprise Edition (64-bit)
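The usual pattern is to add data files (and filegroups) sized for the new drives and then rebuild large tables/indexes onto them. A rough sketch with placeholder database, path, and object names:
<code>
-- Add a new filegroup and a file on one of the 1 TB drives
ALTER DATABASE BigDB ADD FILEGROUP FG_Data2;

ALTER DATABASE BigDB
ADD FILE (NAME = BigDB_Data2,
          FILENAME = 'E:\Data\BigDB_Data2.ndf',
          SIZE = 500GB, FILEGROWTH = 10GB)
TO FILEGROUP FG_Data2;

-- Rebuilding a clustered index onto the new filegroup moves that table's data there
CREATE CLUSTERED INDEX IX_BigTable_SaleDate
ON dbo.BigTable (SaleDate)
WITH (DROP_EXISTING = ON)
ON FG_Data2;
</code>
Nonclustered indexes can be moved the same way, one rebuild at a time, so the data ends up spread across the drives.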
Currently I have a column with multiple postcodes in one value, split with the “/” character, along with the corresponding location data. What I need to do is split these postcode values into separate rows while keeping their corresponding location data.
For example:

PostCode     Latitude   Longitude
66000/66100  42.696595  2.899370
20251/20270  42.196471  9.404951
Would become:

PostCode  Latitude   Longitude
66000     42.696595  2.899370
66100     42.696595  2.899370
20251     42.196471  9.404951
20270     42.196471  9.404951
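A minimal sketch of the row-splitting, using a recursive CTE that peels off one postcode at a time (the source table name PostCodes is an assumption):
<code>
;WITH Split AS
(
    SELECT CAST(LEFT(PostCode + '/', CHARINDEX('/', PostCode + '/') - 1) AS VARCHAR(20))      AS PostCode,
           CAST(STUFF(PostCode + '/', 1, CHARINDEX('/', PostCode + '/'), '') AS VARCHAR(100)) AS Rest,
           Latitude, Longitude
    FROM PostCodes
    UNION ALL
    SELECT CAST(LEFT(Rest, CHARINDEX('/', Rest) - 1) AS VARCHAR(20)),
           CAST(STUFF(Rest, 1, CHARINDEX('/', Rest), '') AS VARCHAR(100)),
           Latitude, Longitude
    FROM Split
    WHERE Rest <> ''
)
SELECT PostCode, Latitude, Longitude
FROM Split;
</code>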
If you see below, there are 2 customer names on 1 loan; most of them share the same last name and address. I want to separate them into fields: LoanID, Customer 1 FirstName, Customer 1 LastName, Customer 2 FirstName, Customer 2 LastName, Address, Zip.
LEFT JOIN Status AS S ON S.LoanID = L.LoanID
LEFT JOIN Borrower B ON B.LoanID = L.LoanID
LEFT JOIN MailingAddress MA ON MA.LoanID = L.LoanID
WHERE S.PrimStat = '1' AND B.Deceased = '0'
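A minimal sketch of flattening the two borrowers into one row per loan: number the borrowers per loan, then pivot the first two (the column names on Borrower/MailingAddress and the Loan table itself are assumptions based on the fragment above):
<code>
;WITH Ranked AS
(
    SELECT B.LoanID, B.FirstName, B.LastName,
           ROW_NUMBER() OVER (PARTITION BY B.LoanID ORDER BY B.FirstName) AS rn
    FROM Borrower B
    WHERE B.Deceased = '0'
)
SELECT L.LoanID,
       MAX(CASE WHEN R.rn = 1 THEN R.FirstName END) AS Customer1FirstName,
       MAX(CASE WHEN R.rn = 1 THEN R.LastName  END) AS Customer1LastName,
       MAX(CASE WHEN R.rn = 2 THEN R.FirstName END) AS Customer2FirstName,
       MAX(CASE WHEN R.rn = 2 THEN R.LastName  END) AS Customer2LastName,
       MA.Address,
       MA.Zip
FROM Loan L
JOIN Ranked R ON R.LoanID = L.LoanID
LEFT JOIN MailingAddress MA ON MA.LoanID = L.LoanID
GROUP BY L.LoanID, MA.Address, MA.Zip;
</code>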
I need to copy data from warehouse tables to master tables on different SQL instances. The refresh needs to be done once an hour. What is the best way to do this: SQL Agent jobs or SSIS packages?
Hi all, I have a large Excel file with one large table which contains data. I've built a SQL Server database and I want to fill it with the data from the Excel file.
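One way is OPENROWSET with the ACE OLE DB provider (SSIS or the Import/Export Wizard are the other common routes). A rough sketch; the file path, sheet name, and target table are placeholders, and the provider must be installed on the server:
<code>
-- One-time server setting needed for ad hoc OPENROWSET queries
EXEC sp_configure 'show advanced options', 1;  RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;  RECONFIGURE;

INSERT INTO dbo.TargetTable
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\Imports\Data.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');
</code>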
I have a SQL 2005 db. I'm trying to perform a query that takes a list of values in a parameter, breaks them up, and uses them in an IN clause, as follows:
<code>
ALTER PROCEDURE [dbo].[SelectPartialOrderListByDate]
    @StoreID varchar(255)
AS
BEGIN
    SET NOCOUNT ON;

    SELECT DISTINCT O.OrderID, M.InitialSalesCode, M.DateOccurred, M.ErrorCode
    FROM dbo.OrderInfo O WITH (NOLOCK)
    INNER JOIN dbo.Message M WITH (NOLOCK) ON O.OrderID = M.OrderID
    WHERE O.StoreID IN (@StoreID)
END
</code>
The values coming into the procedure are obviously in varchar string format. Note the IN clause in the WHERE above: the values within it should be something like IN (87, 108) after the conversion. How can I accomplish this? I've seen examples of creating a UDF that returns a table, but am not sure how to apply that to my situation via a code sample. Could someone help me out?
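A minimal sketch of the table-valued split function alluded to above (the function name and the assumption that the list holds integers are mine):
<code>
CREATE FUNCTION dbo.fn_SplitIntList (@list VARCHAR(255))
RETURNS @ids TABLE (ID INT)
AS
BEGIN
    DECLARE @pos INT;
    SET @list = @list + ',';
    SET @pos = CHARINDEX(',', @list);

    WHILE @pos > 0
    BEGIN
        IF LEFT(@list, @pos - 1) <> ''
            INSERT INTO @ids (ID) VALUES (CAST(LEFT(@list, @pos - 1) AS INT));
        SET @list = SUBSTRING(@list, @pos + 1, 255);
        SET @pos = CHARINDEX(',', @list);
    END
    RETURN;
END
</code>
The WHERE clause in the procedure then becomes: WHERE O.StoreID IN (SELECT ID FROM dbo.fn_SplitIntList(@StoreID)).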
Hi Everyone,

I've been given the painstaking project of splitting a single column into multiple columns and rows. I have a solution set up, which I'm posting further down, but I want to see if there is a much more efficient solution to this.

Sample data:

create table tbl_list
(
    pk_int_itmid int Primary Key,
    vchar_desk varchar(300)
);

create table tbl_test1
(
    fk_int_itmid int references tbl_list(pk_int_itmid),
    vchar_itm varchar(60)
);

insert into tbl_list values (1, 'this item');
insert into tbl_list values (2, 'that item');
insert into tbl_list values (3, 'those items');

insert into tbl_test1 values (1, 'A, B - C, D, E - F, G, H - I');
insert into tbl_test1 values (2, 'J, K - L, M, N - O');
insert into tbl_test1 values (3, 'P, Q - R');

Into this table:

create table tbl_output
(
    fk_int_itmid int references tbl_list(pk_int_itmid),
    vchar_itmA varchar(60),
    vchar_itmB varchar(60),
    vchar_itmC varchar(60)
);

Output in comma-delimited form:

'1', 'A', 'B', 'C'
'1', 'D', 'E', 'F'
'1', 'G', 'H', 'I'
'2', 'J', 'K', 'L'
'2', 'M', 'N', 'O'
'3', 'P', 'Q', 'R'

My current solution:

create view vw_itm_a as
select fk_int_itmid,
       substring(vchar_itm, 0, charindex('-', vchar_itm)) as vchar_itmA,
       substring(vchar_itm, charindex('-', vchar_itm) + 1, charindex(',', vchar_itm) - charindex('-', vchar_itm)) as vchar_itmB,
       substring(vchar_itm, charindex(',', vchar_itm) + 1) as vchar_itmC
from tbl_test1
where charindex(',', vchar_itm) > 1
Go

create view vw_itm_b as
select fk_int_itmid,
       substring(vchar_itm, 0, charindex('-', vchar_itm)) as vchar_itmA,
       substring(vchar_itm, charindex('-', vchar_itm) + 1, charindex(',', vchar_itm) - charindex('-', vchar_itm)) as vchar_itmB,
       substring(vchar_itm, charindex(',', vchar_itm) + 1) as vchar_itmC
from vw_itm_a
where charindex(',', vchar_itmC) > 1
Go

create view vw_itm_c as
select fk_int_itmid,
       substring(vchar_itmC, 0, charindex('-', vchar_itmC)) as vchar_itmA,
       substring(vchar_itmC, charindex('-', vchar_itmC) + 1, charindex(',', vchar_itmC) - charindex('-', vchar_itmC)) as vchar_itmB,
       substring(vchar_itmC, charindex(',', vchar_itmC) + 1) as vchar_itmC
from vw_itm_b
where charindex(',', vchar_itmC) > 1
Go

create view vw_itm_d as
select fk_int_itmid, vchar_itmA, vchar_itmB,
       substring(substring(vchar_itm, charindex(',', vchar_itm) + 1), 0, charindex(',', vchar_itm)) as vchar_itmC
from vw_itm_a ia union vw_itm_b ib on ia.fk_int_itmid = ib.fk_int_itmid
Go

create view vw_itm_e as
select fk_int_itmid, vchar_itmA, vchar_itmB,
       substring(substring(vchar_itm, charindex(',', vchar_itm) + 1), 0, charindex(',', vchar_itm)) as vchar_itmC
from vw_itm_c ia union vw_itm_b ib on ia.fk_int_itmid = ib.fk_int_itmid
Go

create view vw_itm as
select fk_int_itmid, vchar_itmA, vchar_itmC, vchar_itmC from vw_itm_a
where fk_int_itmid not in (select fk_int_itmid from vw_itm_b)
union
select fk_int_itmid, vchar_itmA, vchar_itmC, vchar_itmC from vw_itm_d
union
select fk_int_itmid, vchar_itmA, vchar_itmC, vchar_itmC from vw_itm_b
where fk_int_itmid not in (select fk_int_itmid from vw_itm_c)
union
select fk_int_itmid, vchar_itmA, vchar_itmC, vchar_itmC from vw_itm_e
union
select fk_int_itmid, vchar_itmA, vchar_itmC, vchar_itmC from vw_itm_c
Go

select fk_int_itmid, vchar_itmA, vchar_itmC, vchar_itmC
into tbl_output
from vw_itm

Is there a much more efficient manner of handling this column splitting?

Thanks
DC
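One more compact alternative (a sketch, assuming the elements always come in complete groups of three as in the sample data): normalise both delimiters to a comma, split with a recursive CTE that tracks each element's position, and fold every group of three into one output row.
<code>
;WITH Normalised AS
(
    SELECT fk_int_itmid,
           CAST(REPLACE(vchar_itm, '-', ',') + ',' AS VARCHAR(200)) AS rest,
           CAST(NULL AS VARCHAR(60)) AS element,
           0 AS pos
    FROM tbl_test1
    UNION ALL
    SELECT fk_int_itmid,
           CAST(STUFF(rest, 1, CHARINDEX(',', rest), '') AS VARCHAR(200)),
           CAST(LTRIM(RTRIM(LEFT(rest, CHARINDEX(',', rest) - 1))) AS VARCHAR(60)),
           pos + 1
    FROM Normalised
    WHERE rest <> ''
)
INSERT INTO tbl_output (fk_int_itmid, vchar_itmA, vchar_itmB, vchar_itmC)
SELECT fk_int_itmid,
       MAX(CASE WHEN (pos - 1) % 3 = 0 THEN element END),
       MAX(CASE WHEN (pos - 1) % 3 = 1 THEN element END),
       MAX(CASE WHEN (pos - 1) % 3 = 2 THEN element END)
FROM Normalised
WHERE pos > 0
GROUP BY fk_int_itmid, (pos - 1) / 3;
</code>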
I found a couple of threads about splitting, but none covering the situation I have. I have a server with RAID 10 and a big storage array, and a test server with two 30 GB hard drives (not in a RAID or anything). The situation is that the database on the real server has grown over 30 GB, and now I cannot restore a copy of the database onto the test server because I have one big data file.
Can I somehow split the 35 GB data file into 25 GB and 10 GB when I restore it to the test server?
I have a table that contains agreements and contracts with dates. Now I need to calculate some things, and I'd like to have only one month per row.
I have rows like:
Agreement  Start       End
ID001      2004-01-01  2004-04-30
If I could get these single rows that contain 4 months into a temp table like this:
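A minimal sketch of expanding each agreement into one row per month with a recursive CTE (the source table name Agreements is an assumption):
<code>
;WITH Months AS
(
    SELECT Agreement,
           DATEADD(MONTH, DATEDIFF(MONTH, 0, [Start]), 0) AS MonthStart,   -- first day of the start month
           [End]
    FROM Agreements
    UNION ALL
    SELECT Agreement,
           DATEADD(MONTH, 1, MonthStart),
           [End]
    FROM Months
    WHERE DATEADD(MONTH, 1, MonthStart) <= [End]
)
SELECT Agreement, MonthStart
INTO #AgreementMonths
FROM Months
OPTION (MAXRECURSION 0);   -- allow ranges longer than 100 months
</code>
For ID001 (2004-01-01 to 2004-04-30) this produces four rows, one per month.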
Hi everyone, I have been on the admin side of IT for the past 20 years and recently started to do some scripting (VBScript) and a little SQL.
I have developed a solution that meets the needs of some federal auditors, but it has not really met my needs yet. What I have done is this:
I use MS Logparser to go out to 64 servers and copy the event logs into a DB on a SQL 2000 Ent. Server.
On the SQL server I have one stored procedure that parses out information from the security event log DB and puts that info into a temp DB.
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
ALTER PROCEDURE [dbo].[stp_SecurityAuditReport]
AS
TRUNCATE TABLE SecurityEvents_Tmp
-- Parse Bank Number & UserName
INSERT INTO SecurityEvents_Tmp
    (DepartmentNumber, UserName, EventLog, RecordNumber, TimeGenerated, TimeWritten,
     EventID, EventType, EventTypeName, EventCategory, EventCategoryName, SourceName,
     Strings, ComputerName, SID, Message, Data)
SELECT DepartmentNumber = '001',
       UserName = CASE
                      WHEN Strings LIKE '[0-9][0-9][0-9]%' THEN SUBSTRING(Strings, 1, CHARINDEX('|', Strings, 1) - 1)
                      WHEN Strings LIKE '-|[0-9][0-9][0-9]%' THEN SUBSTRING(Strings, 3, CHARINDEX('|', Strings, 3) - 3)
                      WHEN Strings LIKE '-|[a-z]%' THEN SUBSTRING(Strings, 3, CHARINDEX('|', Strings, 3) - 3)
                      WHEN Strings LIKE 'Account Unlocked. |%' THEN SUBSTRING(Strings, 21, CHARINDEX('|', Strings, 21) - 21)
                      ELSE SUBSTRING(Strings, 1, CHARINDEX('|', Strings, 1) - 1)
                  END,
       Events.*
FROM Events
JOIN EventsToLog ON Events.EventID = EventsToLog.EventID
WHERE SID NOT LIKE 'S-%'
-- Update blank usernames
UPDATE SecurityEvents_Tmp
SET UserName = 'NO USERNAME'
WHERE UserName = '' OR UserName = '-'
-- Update DepartmentNumbers with zeros
UPDATE SecurityEvents_Tmp
SET DepartmentNumber = CASE
                           WHEN UserName LIKE '[0-9][0-9][0-9][a-z]%'
                             OR UserName LIKE '[0-9][0-9][0-9]#%'
                             OR UserName LIKE '[0-9][0-9][0-9]$%'
                           THEN SUBSTRING(UserName, 1, 3)
                           ELSE '001'
                       END
As you can see, we use 3-digit numeric prefixes on all departmental employee accounts. This is later used to produce departmental user audit reports.
I then have this script in a DTS that exports the report to an excel spreadsheet. (All works well for this purpose!)
DECLARE @TimeGenerated datetime
SELECT @TimeGenerated = TimeGenerated FROM SecurityEvents_TimeGenerated

DECLARE @TimeGeneratedEnd datetime
SELECT @TimeGeneratedEnd = TimeGeneratedEnd FROM SecurityEvents_TimeGenerated

SELECT DepartmentName = CASE WHEN b.DepartmentName IS NULL THEN 'All Department' ELSE b.DepartmentName END,
       a.EventID, d.EventDescription, a.UserName, a.TimeGenerated,
       c.Email1, c.Email2, c.Email3, c.Email4
FROM SecurityEvents_Tmp a
LEFT JOIN DepartmentList b ON a.DepartmentNumber = b.DepartmentNumber
LEFT JOIN EmailToList c ON b.DepartmentNumber = c.DepartmentNumber
JOIN EventsToLog d ON a.EventID = d.EventID
WHERE b.DepartmentNumber IN (SELECT DepartmentNumber FROM DepartmentList)
  AND a.TimeGenerated BETWEEN @TimeGenerated AND @TimeGeneratedEnd
ORDER BY b.DepartmentNumber, a.EventID, a.TimeGenerated
This combination of utilities and scripts works very well for producing generic security reports for branch officers.
But now I am getting requests to justify/explain what is in these reports. The problem I have is that the information needed to delve further into the event logs is in a field called Strings. This field not only varies in length and in the number of fields within it, but the information it contains also changes depending on the type of event record it came from.
This is the Strings field from a failed logon (529) 200jenil|DOMAIN|10|User32 |Negotiate|SERVER|SERVER$|DOMAIN|(0x0,0x3E7)|6920|-|10.190.12.10|48397
And this is from Event ID 642 which was an account being created. -|381$cmiller|DOMAIN|%{S-1-5-21-3554868564-134719009-1577582102-7972}|Jmotta|DOMAIN|(0x0,0x58F635E)|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|%%1792|-|-
Now, my script does a good job of getting the first user name out, but as in the 642 event, the second user's name would be useful as well. This is the person that created/modified the user account.
So what I was hoping was that I could use a function (or whatever) to automatically split the Strings value into its individual components and put them into an auto-sizing temp table as something like Field1, Field2, Field3, and so on until the end of the string.
I could then use a case to get the information needed.
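A minimal sketch of that splitter for SQL 2000: a cursor plus a WHILE loop that writes each pipe-delimited piece of Strings into a numbered temp table (the temp table and its column names are mine):
<code>
CREATE TABLE #StringFields
(
    RecordNumber INT,
    FieldNo      INT,
    FieldValue   VARCHAR(255)
);

DECLARE @RecordNumber INT, @Strings VARCHAR(4000), @FieldNo INT, @Pos INT;

DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT RecordNumber, Strings FROM SecurityEvents_Tmp;

OPEN c;
FETCH NEXT FROM c INTO @RecordNumber, @Strings;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @FieldNo = 1;
    SET @Strings = @Strings + '|';            -- trailing delimiter simplifies the loop
    SET @Pos = CHARINDEX('|', @Strings);

    WHILE @Pos > 0
    BEGIN
        INSERT INTO #StringFields (RecordNumber, FieldNo, FieldValue)
        VALUES (@RecordNumber, @FieldNo, LEFT(@Strings, @Pos - 1));

        SET @Strings = SUBSTRING(@Strings, @Pos + 1, 4000);
        SET @FieldNo = @FieldNo + 1;
        SET @Pos = CHARINDEX('|', @Strings);
    END

    FETCH NEXT FROM c INTO @RecordNumber, @Strings;
END
CLOSE c;
DEALLOCATE c;
</code>
A CASE on EventID and FieldNo can then pick out the values of interest; in the 642 sample above, for instance, the second user name (Jmotta) is field 5.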
I have a large db (SQL 2005 Express) that is, well, basically a mess. It is not normalized properly and contains a massive amount of data (due to so much repeated data). To make a long story short, this db needs to be redesigned, but management said no, so that is not an option - so please, no one suggest that that's what I do.
My application creates reports based on this db. The problem is, the SPs are slow, and when several reports need to run, it takes a long time. The SPs and db have been optimized as best I can (adding indexes, etc.).
I was wondering if there is a way to split the db - what I want to do is just retain, say 2 years of data in 1 db, and store the rest of the data in the other db, as 2 years worth of data is 95% of what will be queried. I did copy over 2 years worth for testing, and reports that took 30 minutes in the existing db, take less than 1 minute (sometimes even faster) in the new db - a huge improvement.
My problem is how to deal with the times that I need more than 2 years' worth of data - how do I query both DBs so that my application reads the data from both and it seems like I am only running one db? The new db would be updated daily with new data, but not the old db - so if I had to query 10 years' worth of data, I need 2 years from the first db and then the 8 years from the second db.
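One pattern worth researching here is a UNION ALL view that spans the two databases on the same instance, so the application keeps querying a single name. A minimal sketch (database, table, and column names are placeholders):
<code>
CREATE VIEW dbo.vw_SalesAllYears
AS
SELECT SaleID, SaleDate, Amount
FROM CurrentDB.dbo.Sales        -- the 2-year "hot" database
UNION ALL
SELECT SaleID, SaleDate, Amount
FROM ArchiveDB.dbo.Sales;       -- the historical database
</code>
Reports that only need recent data can keep hitting the small database directly; reports that need the full history query the view with a date filter. With a CHECK constraint on the date column in each table this becomes a partitioned view, and SQL Server can skip the archive database when the filter only covers recent years.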
If anyone can provide some feedback or point me in the right direction of what I should research in order to accomplish the above - I would appreciate it.
If anyone knows of a better solution - please don't be shy - speak up! :)
Hi,
I am new to SQL. I have a database of 30 GB. I have just heard about splitting databases, which helps with performance, so please can anyone guide me through the steps involved? I am anxious to know how it works if I split my database into two different locations. We are using SQL 2000; the operating system is Windows 2000 Server.
Regards,
TV
Hello,
I have been placed in charge of migrating an old Access-based database over to SQL Server 7.0. So far I have imported all the tables into SQL Server, but now I have come across the issue of needing to split a string variable. For instance, in the old database the name variable included both first and last names, whereas in the new database there are separate entities for first and last name. I know there is a way to write a script that will separate the two strings using the "space" in between the name, but I'm unfamiliar with how to do this. Any suggestions? Thanks!
Rick
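A minimal sketch of the split on the first space (the table and column names are placeholders; middle names would need extra handling):
<code>
SELECT FullName,
       LEFT(FullName, CHARINDEX(' ', FullName + ' ') - 1)                  AS FirstName,
       LTRIM(SUBSTRING(FullName, CHARINDEX(' ', FullName + ' ') + 1, 100)) AS LastName
FROM dbo.People;    -- hypothetical table name
</code>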
I have a field that contains codes like
fhj#asd
skjjljlj#12
and so on. What I want to do is create two new fields (field1 and field2) that split the original field at '#'. If a field does not contain '#', I would like its entire contents to be saved in field1. Also, how do I ensure that I save these changes? Thanks for any help in advance.
Regards,
Ciarán
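A minimal sketch, assuming the source column is Code in a table dbo.Codes and that field1/field2 already exist; an UPDATE like this is permanent once it commits (wrap it in a transaction first if you want to check the result and roll back):
<code>
UPDATE dbo.Codes
SET field1 = CASE WHEN CHARINDEX('#', Code) > 0
                  THEN LEFT(Code, CHARINDEX('#', Code) - 1)
                  ELSE Code                                   -- no '#': keep the whole value in field1
             END,
    field2 = CASE WHEN CHARINDEX('#', Code) > 0
                  THEN SUBSTRING(Code, CHARINDEX('#', Code) + 1, LEN(Code))
             END;
</code>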
This may be a stupid question, but I'll throw it out here: is it possible to use SQL 2005 to split up PDF files into individual files by a field on the form or an index?