Imagine a table in Microsoft SQL Server 2000 named Pictures with three
fields: a primary key ID (int), a Name (nvarchar), and a Picture (image)
field.
Let's put some records into the table. The first two fields, as expected,
would take an int and a string. The third field is of type image.
Instead of putting a bitmap in this field, let's place an XML document
that has been streamed into a byte array. The XML document would
describe the picture using, say, lines. If the picture were a
square, we would have four lines in the XML document. You could think of
the XML document as something similar to vector graphics, but the
details are not relevant. The important fact is that the content of
the image field is NOT a bitmap but a binary stream of an XML document.
Now imagine we have a reporting tool like Crystal Reports that can be
used to report on this database table, and we create a report using
the three fields mentioned above. As far as Crystal is concerned,
the first field is an int, the second a string, and the third an image.
If our table had ten records and we previewed the report, we would like
to see ten entries, each consisting of an ID, a Name, and a Picture.
This can only happen if the image field contains a bitmap, but as
mentioned above, the field contains an XML document.
Now my question...
Can we write something in SQL Server 2000 (not 2005) to sit between the
table and Crystal Reports to convert the XML document to a bitmap?
The restriction is that we cannot use anything but SQL Server itself.
The client in the above case happens to be Crystal Reports, but it could
be anything.
I know SQL Server 2005 supports C# with access to the .NET Framework
within the database. Unfortunately, I am not using SQL Server 2005.
Some people have suggested the use of user-defined functions and T-SQL.
I would like to know from the more experienced SQL Server people whether
what I am trying to achieve is possible. Maybe it has not been done, but
is it possible?
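For what it's worth, the closest thing SQL Server 2000 offers to in-database .NET is an extended stored procedure: a C/C++ DLL registered with sp_addextendedproc. The procedure name and DLL below are purely hypothetical; the actual bitmap rendering would have to live in the DLL, since T-SQL itself has no imaging capability:

```sql
-- Hypothetical sketch: register a C/C++ extended stored procedure
-- that would read the XML stream and return a rendered bitmap.
-- Both 'xp_XmlToBitmap' and 'xpimage.dll' are made-up names.
EXEC sp_addextendedproc 'xp_XmlToBitmap', 'xpimage.dll';
```

Note that an extended stored procedure is invoked with EXEC rather than inline in a SELECT, so the report would have to be pointed at a wrapper procedure rather than the raw table.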
Below is a statement from the Microsoft SQL Server migration documentation:
"When a view is defined with an outer join and is queried with a qualification on a column from the inner table of the outer join, the results from SQL Server and Oracle can differ. In most cases, Oracle views are easily translated into SQL Server views."
Can anyone explain the above with some examples? I can't find a 'CREATE VIEW' statement that exists in both databases yet produces different result sets. Am I wrong?
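One sketch of the kind of difference that sentence may be alluding to (table and column names invented) is the legacy outer join syntax. Oracle's (+) operator and SQL Server's old *= operator fold a qualification on the inner table into the join itself, whereas an ANSI LEFT JOIN applies a WHERE qualification after the join, eliminating the NULL-extended rows:

```sql
-- Legacy SQL Server syntax: with *=, the predicate on the inner
-- table (o) is treated as part of the outer join, so customers with
-- no 'open' orders still appear, NULL-extended.
SELECT c.name, o.status
FROM customers c, orders o
WHERE c.id *= o.cust_id
  AND o.status = 'open'

-- ANSI syntax: the WHERE clause runs after the outer join, so the
-- NULL-extended rows fail o.status = 'open' and disappear.
SELECT c.name, o.status
FROM customers c
LEFT JOIN orders o ON c.id = o.cust_id
WHERE o.status = 'open'
```

So a view defined with one flavor of outer join, queried with a filter on a column of the inner table, can return a different row count after a mechanical translation between the two products.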
Another tech was having the same problem with his stored procedure, which he had also made into a macro. Like me, he thought that since he had made the user the dbo owner he would have no problems, but since he was the one who created the database and the stored procedures, the user got that same error message when trying to access them. It went away after he went into Enterprise Manager and, in the properties of the stored procedure under Permissions, gave the user EXEC permission. And voilà, problem solved :)
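The same fix can be scripted instead of clicking through Enterprise Manager; the procedure and user names here are placeholders:

```sql
-- Grant a specific user permission to execute a stored procedure.
-- dbo.MyProc and SomeUser are placeholder names.
GRANT EXECUTE ON dbo.MyProc TO SomeUser
```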
Anyone else experience this? A developer just finished complaining about the performance of one of our databases. Well, he sent me the query and I couldn't understand why it was such a dog. Anyway, I rewrote it. The execution plan is totally different between the two. I had no idea specifying the join made such a difference. The first SQL executed in 7 minutes; the second took 1 second.

SELECT dbo.contract_co.producer_num_id, contract_co_status
FROM dbo.contract_co, dbo.v_contract_co_status
WHERE ( dbo.v_contract_co_status.contract_co_id = dbo.contract_co.contract_co_id )
  AND contract_co_status = 'Pending'
  OR ( contract_co_status = 'Active' AND effective_date > '1/1/2004' )

SELECT dbo.contract_co.producer_num_id, contract_co_status
FROM dbo.contract_co
INNER JOIN dbo.v_contract_co_status
  ON dbo.contract_co.contract_co_id = dbo.v_contract_co_status.contract_co_id
WHERE contract_co_status = 'Pending'
  OR ( contract_co_status = 'Active' AND effective_date > '1/1/2004' )
I have a system in place currently that has both a local and a live database. Currently the sales database is kept in line with a home-grown "synchronise" script that keeps the two tables (live and local) in sync with the use of local and remote IDs. For example:

Table "Sales"
ID - int
RemoteID - int
Various data fields

The 'live' system writes new records using the RemoteID field, whilst the local one inserts new records using the ID field. The sync script copies the data up/down and assigns the corresponding RemoteID/IDs to the records as they are copied.

It's been decided that this process must change, in favor of two-way replication. I have been tasked with finding a solution for the safe handling of IDs, as there's a danger that two people can be writing a record (one live, one local) and possibly create the same IDs. I know that replication has its own IDs and could probably manage this well on its own, but these 'internal' IDs have to be unique due to business rules.

So I have been considering two possibilities to address this problem:
1. Staggering the IDs - for instance, live IDs begin at 1,000,001 and local IDs begin at 1.
2. Having a controlling ID table which provides the next ID.

Both options have their advantages and disadvantages. With option 1 we will run into a problem later when the local IDs reach the 1 million mark (a few years away, I admit), and option 2 will need careful locking/transactional code to ensure the same ID isn't given out twice.

Does anyone have experience with this kind of issue, or could you suggest an alternative approach to this problem? Thanks in advance!
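A third option worth considering (a sketch only; the table definitions are illustrative) is to interleave the ranges rather than stagger them, using the IDENTITY seed and increment so the two sites can never collide and neither range ever runs out:

```sql
-- Live server generates odd IDs (seed 1, increment 2)
CREATE TABLE Sales_Live (
    ID int IDENTITY(1, 2) NOT NULL PRIMARY KEY
    -- ... other data columns ...
)

-- Local server generates even IDs (seed 2, increment 2)
CREATE TABLE Sales_Local (
    ID int IDENTITY(2, 2) NOT NULL PRIMARY KEY
    -- ... other data columns ...
)
```

This avoids both the exhaustion problem of option 1 and the locking code of option 2, at the cost of non-contiguous IDs, which may or may not be acceptable under your business rules.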
We've had an interesting server issue here since about the start of the year, where the dual processor Netfinity server we've been running SQL 7 on has been holding steady at almost exactly 50% CPU utilization, give or take about 5%-10%. Disk utilization in all areas is relatively low. In fact, the server will do this right after a fresh reboot, with no users accessing. The really interesting part is that a graph of the two processors mirror each other. When the utilization of one goes up, the other goes down, almost perfectly. The two processors are the same speed, but their firmware build is different. IBM says it's not a problem. What say ye?
Hi! I am facing an interesting problem with one of my databases. When I double-click on a particular table in Enterprise Manager, the Manage Tables window shows me only one or two fields from the table (it does not show me all the fields). When I run sp_help <tablename> in isql, all the information is displayed. Why is there such an inconsistency? It happens only on one particular database. I have run DBCC commands but still no help. Does anyone have any ideas on this? Thanks,
We're having to work with some legacy data. The tables in the so-called database seem to have way more nulls than actual data. One table appears to have around 100 or more columns in it. It has close to 40,000 rows.
This Db has pretty much 0 normalization present.
IOW, your worst nightmare.
Is there a way we could run a query that would return the total number of cells that contain NULL, and another that could return the total number of data-bearing cells, so we could come up with a percentage or a ratio?
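One approach (a sketch; 'LegacyTable' is a placeholder) is to generate the counting expression from INFORMATION_SCHEMA.COLUMNS, since hand-writing a CASE per column is impractical at 100 columns. The data-bearing count is then just rows * columns minus the NULL count:

```sql
-- Build "SUM(CASE WHEN [col] IS NULL THEN 1 ELSE 0 END) + ..." for
-- every column of the table, then execute it.
DECLARE @sql varchar(8000)
SELECT @sql = ''
SELECT @sql = @sql + 'SUM(CASE WHEN [' + COLUMN_NAME + '] IS NULL THEN 1 ELSE 0 END) + '
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'LegacyTable'

-- Trim the trailing ' + ' and wrap the expression in a SELECT
SELECT @sql = 'SELECT NullCells = ' + LEFT(@sql, LEN(@sql) - 2) + ' FROM LegacyTable'
EXEC (@sql)
```

Note the varchar(8000) ceiling on SQL 2000: at 100+ columns the generated string gets close to the limit, so very wide tables may need the expression split in two.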
In this SQL query I get an error: Incorrect syntax near ','.

SELECT SUM(mCount * mPrice),
       CONVERT(DATEPART(yyyy, mDate), varchar) + '.' + CONVERT(DATEPART(mm, mDate), varchar) AS DateSold
FROM sales
WHERE mDate BETWEEN '9/10/2007 0:0:0' AND '1/10/2008 23:59:59'

I think it isn't necessary to show the values in the table because it is a syntax error, but I don't understand how it can be incorrect... Thanks for your help... Saren Taşçıyan
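For what it's worth, the likely culprit is the argument order of CONVERT: the target data type comes first and the expression second. A corrected sketch (the query also needs a GROUP BY once an aggregate is mixed with a non-aggregated expression):

```sql
-- CONVERT(target_type, expression): the type is the FIRST argument.
SELECT SUM(mCount * mPrice),
       CONVERT(varchar, DATEPART(yyyy, mDate)) + '.' +
       CONVERT(varchar, DATEPART(mm, mDate)) AS DateSold
FROM sales
WHERE mDate BETWEEN '9/10/2007 0:0:0' AND '1/10/2008 23:59:59'
GROUP BY DATEPART(yyyy, mDate), DATEPART(mm, mDate)
```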
When I want to see ONLY today's records on the SQL server, it takes 5 minutes. When I want to see records up to and including today's, there is no problem. (19 Sep. is a past date which works properly.) It is very interesting that it takes 5 minutes to sort the last day while there is nothing wrong with the other days. I ran DBCC INDEXDEFRAG but it didn't change anything.
I am wondering about something: every day at 22:00 we take a backup. Is this process creating a difference between before and after the backup? It is a really interesting issue; I do not know what to say. It seems there is an issue with today's records. Maybe there are some locks on today's records, I do not know. Why does it not cause any issue for old dates, only the current day?
SELECT DISTINCT CallIDChar, ANI, DNIS, CallerName, AgentName, CallQueue, AgentID,
       CallType, CallStartTimeStamp, CallEndTimeStamp, CallDuration
FROM (SELECT MAINTBL.*, DRTTABLE.CallDuration AS CallDuration
      FROM CallInfo AS MAINTBL
      JOIN (SELECT CallID,
                   DATEDIFF(second, MIN(CallStartTimeStamp), MAX(CallEndTimeStamp)) AS CallDuration
            FROM CallInfo
            GROUP BY CallID) AS DRTTABLE
        ON MAINTBL.CallID = DRTTABLE.CallID) AS CallInfo
WHERE CallIDChar = CallIDChar
  AND (CallStartTimeStamp >= '2007-09-20T00:00:00')
  AND (CallStartTimeStamp <= '2007-09-20T23:59:59')
What can be the reason for this issue, and how can I solve it?
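One possible cause (a guess, not a diagnosis): on a table whose rows arrive in date order, the statistics histogram often ends before the newest day, so the optimizer estimates a near-zero row count for today's range and picks a bad plan, while ranges over older dates estimate correctly. Refreshing statistics after the nightly load is a cheap thing to try:

```sql
-- Rebuild statistics on the table so the histogram covers today's
-- rows; FULLSCAN reads every row rather than sampling.
UPDATE STATISTICS CallInfo WITH FULLSCAN
```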
I have a very simple SSIS package that imports data from an ODBC source. There are two "Time" columns in the source, but when this table is created in SQL 2005, it gives me "bigint"!!!
How can this happen?
When I tried importing the data to SQL 2000, those two columns were created as "smalldatetime" properly.
I am making a report in Visual Studio. Now my question is, is there any way that I can add a tab control to my RDL? Hope to hear from anyone out there with the know-how. Thanks, guys!!
Having no more experience than reading Books Online, here is an interesting idea I would like to run by you guys, and you can let me know if it is feasible or tell me I need to put the crack pipe down...
We are going to increase the number of disks in our SAN, and I was speaking with the SAN administrator and he mentioned the shuffling of logical drives to match the new space. He said he is going to have to go through quite a few combinations/permutations on figuring out the best configuration for what data goes on the old vs. new to get the optimal space.
Is this something that can be modeled out? I can write something that recursively figures it out, but why not explore fun ideas with tools that may be able to do it?
Hello. In the code

SqlParameter dtDate = new SqlParameter("@date", SqlDbType.SmallDateTime, 4);
dtDate.Value = Convert.ToDateTime(Session["date"].ToString());
cmd.Parameters.Add(dtDate);

where the @date parameter is defined as a SMALLDATETIME inside a stored procedure, I cannot use a function such as "Convert.ToSmallDateTime", and it seems a conversion from DateTime to SmallDateTime takes place somewhere within the code. Eventually it works, but I wonder if everything's OK with this code. Is there anything wrong with it? Thanks
I have an interesting request. I would like to take a table such as:
date     value
------   -----
1/1/01   10.5
1/2/01   9.3
1/3/01   11.5
1/4/01   4.7
1/5/01   3.2
and calculate an average for each record, based on its own value AND the following two values. In other words, for the first record, I would take sum(10.5 + 9.3 + 11.5)/3 and put this into a third column in my table. I would then do the same for the next record: sum(9.3 + 11.5 + 4.7)/3. Thanks in advance!
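Since SQL Server 2000 has no window functions, one way to do this (a sketch; the table name is a placeholder, and it assumes one row per consecutive day as in the sample) is a self-join over a small date range:

```sql
-- For each row, average its value with the following two days'
-- values. Rows near the end of the table average whatever rows
-- exist in their window.
SELECT t1.[date],
       t1.[value],
       AVG(t2.[value]) AS fwd_avg
FROM mytable t1
JOIN mytable t2
  ON t2.[date] BETWEEN t1.[date] AND DATEADD(day, 2, t1.[date])
GROUP BY t1.[date], t1.[value]
```

If the dates can have gaps, the DATEADD window would instead be replaced by a correlated COUNT(*) limiting each row to its next two rows by date order.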
Run the following:

declare @s1 varchar(10), @s2 varchar(10)
declare @t table (recordid int identity(1,1) not null, field char(1) not null)
insert @t (field) select 'a' union select 'b'
select @s1 = '', @s2 = ''
select @s1 = @s1 + field + ',' from @t order by 1
select @s2 = @s2 + field + ',' from @t order by recordid
select [Weird] = substring(@s1, 1, (datalength(@s1) - 1)),
       [Not]   = substring(@s2, 1, (datalength(@s2) - 1))
The value in a table of one DB is 17869 sq. ft. To insert this value into a new table in another database, the reporting basis is 1000, i.e. I need to do 17869/1000 = 17.869, so I have to take the value as 18. Another thing to keep in mind is that the value in the new table should have leading zeroes: if the value is 18 it should be stored as 0000018 (the data type in the new table is varchar(7) and in the old table char(9)). What would be the best way to implement this?
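A sketch of one way to do it in T-SQL (the variables stand in for the old and new columns; the old value is char(9), so it is trimmed and cast before rounding):

```sql
-- Round to the nearest thousand and left-pad to 7 characters.
DECLARE @old char(9), @new varchar(7)
SELECT @old = '17869'

SELECT @new = RIGHT('0000000' +
       CAST(CAST(ROUND(CAST(RTRIM(@old) AS int) / 1000.0, 0) AS int) AS varchar(7)), 7)

SELECT @new   -- 0000018
```

The RIGHT('0000000' + ..., 7) trick pads any 1-to-7-digit result to exactly the varchar(7) width of the new column.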
All- I would greatly appreciate some assistance in creating a SELECT statement for table headcount in the following problem. First, find the schema here: http://home.pacbell.net/eulimi/schema_headcount.GIF

The problem: given a person_id (a record in the person table), list all headcount records WHERE:
1) headcount.person_id = person.person_id AND
2) the headcount.act_session_id belongs to an activity (from the activity_session table) that the person is allowed to attend by virtue of the person's membership in a subexpedition.

Note that:
- A person can be a member of one or more circles (via the circle_person table).
- Each circle belongs to exactly one subexpedition.
- A person who is a member of a subexpedition (via one or more circles) is entitled to sign up for one or more activities (activity_sessions) associated with the subexpedition.

So, put yet another way, the SELECT command should show, for a given person_id, the list of records in headcount that includes the activities (act_session_id) that the person is entitled to join by virtue of the person's membership in the subexpedition via his circle. (Whew!) I hope this is enough information. I think the problem would be fun if I knew what I was doing. (SQL newbie here.) I suspect I need several INNER JOINs and a compound WHERE clause. Thanks in advance!!!
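A sketch built only from the relationships described above; the linking column names (circle_id, subexpedition_id, act_session_id) are guesses, since the actual schema is only in the GIF:

```sql
-- Headcount rows for @person_id whose activity session belongs to a
-- subexpedition the person reaches through one of their circles.
-- DISTINCT collapses duplicates when a person reaches the same
-- subexpedition through more than one circle.
SELECT DISTINCT h.*
FROM headcount h
INNER JOIN activity_session s ON h.act_session_id = s.act_session_id
INNER JOIN circle c           ON s.subexpedition_id = c.subexpedition_id
INNER JOIN circle_person cp   ON c.circle_id = cp.circle_id
WHERE h.person_id = @person_id
  AND cp.person_id = @person_id
```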
I have been tasked with an interesting problem using SQL 2005. Simplified, it looks like this: I have two tables, both well over a million rows, with totally unrelated keys between the tables. I need to find records in table one that do not exist in table two. The twist is this: I need to find rows where the invoice number in table 1 does not exist at all in table 2, or where the invoice number exists but the timestamp associated with the invoice in table 2 is more than 10 seconds different from the invoice/timestamp in table 1. I'm fairly new to SQL and, although I can do the really simple stuff, this one is over my head, I'm afraid. Any help at all would be greatly appreciated. I would also like to hear any suggestions regarding books that might give examples of SQL code at around this level of difficulty. Thanks in advance for your assistance! Paul
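One hedged sketch (table and column names are invented, since the post doesn't give them): a NOT EXISTS for the entirely missing invoices, unioned with invoices that exist in table 2 but never within the 10-second tolerance:

```sql
-- Invoices in table1 with no match at all in table2
SELECT t1.invoice_no, t1.ts
FROM table1 AS t1
WHERE NOT EXISTS (SELECT 1 FROM table2 AS t2
                  WHERE t2.invoice_no = t1.invoice_no)

UNION ALL

-- Invoices that do exist in table2, but never within 10 seconds
SELECT t1.invoice_no, t1.ts
FROM table1 AS t1
WHERE EXISTS (SELECT 1 FROM table2 AS t2
              WHERE t2.invoice_no = t1.invoice_no)
  AND NOT EXISTS (SELECT 1 FROM table2 AS t2
                  WHERE t2.invoice_no = t1.invoice_no
                    AND ABS(DATEDIFF(second, t1.ts, t2.ts)) <= 10)
```

With millions of rows, indexes on the invoice number columns of both tables matter far more than the exact phrasing of the query.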
I've written this procedure that is a metadata manager, if you will. It looks at a vocabulary table and will create or alter tables to be in line with the metadata. It works great and has for a couple of years now. However, I just rewrote it in an effort to make it more compact, efficient, etc. Now, our system uses at least three databases. One contains shared data; the next holds only system data for the app, like individual user settings (atslogin is its name); then there is an actual database for client data, or accounts as we call them. Clients could have one or 100 of those. Please, no comments about moving everything into one DB, as that is not always possible due to the hierarchy of multiple business models.
Now, on to the issue. This procedure used to exist in the local database (one or many); now I keep it in atslogin, so in effect it looks "down" on all other databases and does its thing. It adds columns, expands them, creates tables, views, keys, blah blah blah.
Here is the issue. When it creates a new column, the ordinal position is out of whack. I've got a table where I keep dropping the last column and running my procedure to add it back, and I find that the ordinal position has increased by one each time. Add it and the value is 48, for example. Drop it and add it again and it is now 49, and all the while there is a gap between, say, 47 and 49. This is being written to syscolumns and the view INFORMATION_SCHEMA.COLUMNS.
This is a big deal because if I find columns are out of order I won't attempt to alter the table. Trouble is, the columns are in the proper order; I just can't buy what the system is saying. Anyone ever seen this? Even if I add a column as the db owner I'm seeing this. This is SQL 2K. Below is the exact version:
Microsoft SQL Server 2000 - 8.00.760 (Intel X86) Dec 17 2002 14:22:05 Copyright (c) 1988-2003 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 2)
Ok, I work for an organization which receives numerous contracts from numerous clients. Now, with each new contract comes new flat files needing importing (with differing formats and fields, etc.), faulty data, etc. It's not as easy as just telling all the contractors "OK, we now need all the data you send us in a generic universal format", as many of the contractors have contracts dating back years, and they only provide what their contract specifies they provide, nothing more. Also, it's not as easy as saying "get on board with this new format or the contract is cancelled", as many of the contracts carry steep cancellation fees and/or actually breaking the relationship is just not feasible, given the services some provide.

So basically, does anyone out there have experience with constructing a flexible method of dealing with routine introduction of new data formats, and a constant barrage of change? Any tips, links, or books you can recommend? Thanks!
We have moved from SQL 2000 to SQL 2005 for our main server, and our reporting server, which uses transactional replication.
Now, in SQL 2000 when I originally setup replication, it replicated all of the table indexes.
I have recreated the publications in SQL 2005, but they are no longer there. Do you have any idea what would cause some of our table indexes to be missing?
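In case it helps anyone hitting the same thing: in transactional replication, whether indexes are scripted to the subscriber is controlled per article by the schema_option bitmask, which you can inspect like this (the publication and article names are placeholders):

```sql
-- Show the article's properties; the schema_option column in the
-- output is the bitmask that governs whether clustered and
-- nonclustered indexes are copied to the subscriber.
EXEC sp_helparticle @publication = N'MyPublication', @article = N'MyTable'
```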
Hi, here is my situation. I have two tables. One table holds stock quantities of items in several warehouses, with the following fields: Balance_Date, Warehouse, ItemCode, Qty. It holds the balance for each day since I started calculating, but I only use the last date for the current purpose.
The other table holds "in" and "out" transactions for items in these warehouses, with dates only later than the last day in the previous table.
For example, if the last date in the first table is 01/Jan/2008, then the second table holds only records with dates 02/Jan/2008 and greater.
Now comes my question.
I need to make a table that has records from the first table as an opening balance for the records in the second table,
so each group of warehouse, itemcode from the first table gets inserted with the same date, before the first record of each similar group from the second table.
I think I will also need a sequence field added so I know which record comes first (probably the record from the first table will get a seq of 0 (zero); the rest of the seq numbers are not really relevant, since I am eventually going to sum it per month).
I hope I am making myself clear; please ask me for further info if necessary (and forgive my spelling mistakes).
If you guys can help me solve this, then I can move on to the next problem, which is calculating running totals for each month for each item in each warehouse (but that's for another thread).
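A sketch of the combined INSERT (the target table name, the transaction table's column names, and the 'in'/'out' encoding are guesses based on the description):

```sql
-- Opening balances: the latest snapshot per warehouse/item, seq = 0
INSERT INTO combined (Balance_Date, Warehouse, ItemCode, Qty, Seq)
SELECT b.Balance_Date, b.Warehouse, b.ItemCode, b.Qty, 0
FROM balances b
WHERE b.Balance_Date = (SELECT MAX(Balance_Date) FROM balances)

-- Transactions: seq = 1, quantities signed by direction, so a later
-- SUM over the group yields the running balance per month
INSERT INTO combined (Balance_Date, Warehouse, ItemCode, Qty, Seq)
SELECT t.Trans_Date, t.Warehouse, t.ItemCode,
       CASE t.Direction WHEN 'in' THEN t.Qty ELSE -t.Qty END, 1
FROM transactions t
```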
I was looking to modify how the INSERT happens with regard to replication, only to find my solution in the proc itself. When I edit the proc, this is what I am shown in SQL QA or EM:
create procedure "sp_MSins_dboMEETING";2
    @c1 int, @c2 int, @c3 int, @c4 varchar(250), @c5 datetime, @c6 datetime,
    @c7 bit, @c8 char(1), @c9 datetime, @c10 bit, @c11 bit, @c12 bit,
    @c13 datetime, @c14 smallint, @c15 datetime, @c16 smallint, @c17 binary(8),
    @c18 bit, @c19 bit, @c20 bit, @c21 varchar(1000)
as
if exists ( select * from "dbo"."MEETING" where "MEETING_ID" = @c1 )
begin
    update "dbo"."MEETING" set
        "MEETING_TYPE_ID" = @c2, "MEETING_STATUS_ID" = @c3, "TITLE" = @c4,
        "START_DATE" = @c5, "END_DATE" = @c6, "PUBLISH_IND" = @c7,
        "GROUP_IND" = @c8, "PUBLISH_DATE" = @c9, "MY_ADVISORS" = @c10,
        "SUBMITTED_IND" = @c11, "ACTIVE_IND" = @c12, "CREATE_DATE" = @c13,
        "CREATED_BY" = @c14, "LAST_UPDATE_DATE" = @c15, "LAST_UPDATED_BY" = @c16,
        "DATE_INDEXED" = @c17, "ON_DEMAND_IND" = @c18, "NOT_REPORTED_IND" = @c19,
        "MAJOR_PROJECT_IND" = @c20, "MAJOR_PROJECT_COMMENT" = @c21
    where "MEETING_ID" = @c1
end
else
begin
    insert into "dbo"."MEETING" (
        "MEETING_ID", "MEETING_TYPE_ID", "MEETING_STATUS_ID", "TITLE",
        "START_DATE", "END_DATE", "PUBLISH_IND", "GROUP_IND", "PUBLISH_DATE",
        "MY_ADVISORS", "SUBMITTED_IND", "ACTIVE_IND", "CREATE_DATE",
        "CREATED_BY", "LAST_UPDATE_DATE", "LAST_UPDATED_BY", "DATE_INDEXED",
        "ON_DEMAND_IND", "NOT_REPORTED_IND", "MAJOR_PROJECT_IND",
        "MAJOR_PROJECT_COMMENT" )
    values ( @c1, @c2, @c3, @c4, @c5, @c6, @c7, @c8, @c9, @c10, @c11, @c12,
             @c13, @c14, @c15, @c16, @c17, @c18, @c19, @c20, @c21 )
end
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
Is the second version listed at the bottom like a comment, or can it actually get called? I am going to script out all of these procs and save them, then remove the first version (listed at the top) and use the second version, since it does what I need. I just thought it was interesting to see two stored procedures in a single definition; I've never seen the "PROC_NAME";2 notation before, have you? If so, please tell me what it does. Is it just a way to create a second version of the procedure in a comment-type fashion, or is it used another way?
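For reference, the ";2" suffix is SQL Server's numbered procedures feature: a group of procedures sharing one name, each independently callable by its number and dropped as a unit, not a comment. A minimal illustration with made-up names:

```sql
-- Two procedures grouped under one name.
CREATE PROCEDURE dbo.demo_proc;1 AS SELECT 'version one'
GO
CREATE PROCEDURE dbo.demo_proc;2 AS SELECT 'version two'
GO

EXEC dbo.demo_proc;1   -- runs the first procedure in the group
EXEC dbo.demo_proc;2   -- runs the second

DROP PROCEDURE dbo.demo_proc   -- drops the entire group at once
```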
We are using a SQL Server 2000 database (with SP3), and recently we faced an interesting issue with full-text search. According to the SQL Server help page:

"AND | AND NOT | OR Specifies a logical operation between two contains search conditions. When <contains_search_condition> contains parenthesized groups, these parenthesized groups are evaluated first. After evaluating parenthesized groups, these rules apply when using these logical operators with contains search conditions."

Our contains_search_condition consists of a proximity term, and each contains_search_condition is combined with a logical OR operator. When we execute each proximity term separately, we fetch the correct set of records. Here are the SQL statements:

select COUNT(*) from TABLE_NAME WHERE CONTAINS(COL_NAME, '"JOHN" NEAR "JANE"')  (this returns 20 records)
select COUNT(*) from TABLE_NAME WHERE CONTAINS(COL_NAME, '"JOHN" NEAR "DOE"')  (this returns 10 records)

If we join these with an OR operator and pass them to a single CONTAINS clause, we fetch more records than the sum of the records returned by the statements above, which really can't happen. Here is the corresponding SQL statement:

select COUNT(*) from TABLE_NAME WHERE CONTAINS(COL_NAME, '(("JOHN" NEAR "JANE") OR ("JOHN" NEAR "DOE"))')  (this returns 80 records)

Furthermore, if I write the same statement in terms of two CONTAINS clauses combined with an OR operator, then I fetch the correct number of records. The corresponding SQL is:

select COUNT(*) from TABLE_NAME WHERE CONTAINS(COL_NAME, '("JOHN" NEAR "JANE")') OR contains(DOC_INFO, '("JOHN" NEAR "DOE")')  (this returns 25 records)

Does anyone know if there is a bug in SQL Server 2000 full-text search? I would appreciate it if anybody could shed some light on my problem.