SQL Server 2005 Non-ANSI Joins: Any Easy Solutions?
Jul 20, 2006
My company wants me to research any flags or registry tricks that would allow non-ANSI joins ('=*' and '*=') to be used in SQL Server 2005 with a compatibility mode of 90.
The way I understand the situation, in SQL Server 2005 with the database compatibility level set to 90, non-ANSI join SQL such as the following will not work:
Select * from
Customer, Sales
Where Customer.CustomerID *= Sales.CustomerID
To work, the SQL above would have to be converted to ANSI join SQL such as the following:
Select * from
Customer LEFT OUTER JOIN Sales
On Customer.CustomerID = Sales.CustomerID
Many hours would be spent browsing through millions of lines of code to find the non-ANSI SQL and make the changes.
Does anyone know of any trace flags or registry entries that would let SQL Server 2005 work in 90 compatibility and still allow non-ANSI =* and *= joins in SQL?
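As far as I know there is no documented trace flag or registry setting that re-enables *= under compatibility level 90; the commonly cited workaround is to leave the affected database at compatibility level 80, which SQL Server 2005 still supports. A minimal sketch (the database name is a placeholder):

    -- Check the current compatibility level
    EXEC sp_dbcmptlevel 'MyDatabase';

    -- Run the database at level 80 so old-style *= / =* joins keep working
    EXEC sp_dbcmptlevel 'MyDatabase', 80;

The trade-off is losing level-90 behaviors for that database, so this buys time for the rewrite rather than avoiding it.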
Hi everyone, can anyone help me solve my problem regarding a SELECT? I'm using PB 6.5 running against a MSSQL 2005 database. I attached an image for your reference. Thanks!
Help... I have a DB I'm working with that I know doesn't work with the ANSI-92 JOIN syntax. I'm not sure how much this limits my ability to deal with the following situation, so I'm soliciting the help of a guru. I apologize for the lack of scripted table structure, but this database is embedded in an application that I have no true schema for. I have a crude diagram of the tables and some of the relationships, and I've managed to manually map some of the fields in the tables I'm working with.

What I have is a table (A) that I need to join with 10 other tables. I'm joining on an identifier in (A) that may exist many times in any of the other 10 tables, and may not be in ANY of the tables. When I run this query:

    SELECT SAMPLES.PK_SampleUID, UDFSAMPLEDATA02.AlphaData, UDFSAMPLEDATA01.AlphaData,
           UDFSAMPLEDATA03.AlphaData, UDFSAMPLEDATA05.AlphaData, UDFSAMPLEDATA06.AlphaData,
           UDFSAMPLEDATA07.AlphaData, UDFSAMPLEDATA08.AlphaData, UDFSAMPLEDATA09.AlphaData,
           UDFSAMPLEDATA10.AlphaData
    FROM SAMPLES, UDFSAMPLEDATA01, UDFSAMPLEDATA02, UDFSAMPLEDATA03, UDFSAMPLEDATA05,
         UDFSAMPLEDATA06, UDFSAMPLEDATA07, UDFSAMPLEDATA08, UDFSAMPLEDATA09, UDFSAMPLEDATA10
    WHERE UDFSAMPLEDATA02.AlphaData <> ' '
      AND UDFSAMPLEDATA01.FK_SampleUID = SAMPLES.PK_SampleUID
      AND UDFSAMPLEDATA02.FK_SampleUID = SAMPLES.PK_SampleUID
      AND UDFSAMPLEDATA03.FK_SampleUID = SAMPLES.PK_SampleUID
      AND UDFSAMPLEDATA05.FK_SampleUID = SAMPLES.PK_SampleUID
      AND UDFSAMPLEDATA06.FK_SampleUID = SAMPLES.PK_SampleUID
      AND UDFSAMPLEDATA07.FK_SampleUID = SAMPLES.PK_SampleUID
      AND UDFSAMPLEDATA08.FK_SampleUID = SAMPLES.PK_SampleUID
      AND UDFSAMPLEDATA09.FK_SampleUID = SAMPLES.PK_SampleUID
      AND UDFSAMPLEDATA10.FK_SampleUID = SAMPLES.PK_SampleUID

I get back what appears to be the gazillion COMBINATIONS of all the fields in all the tables; the query doesn't even finish before the ODBC driver I'm working with crashes my VBScript. Is there some way to take the multiple returned rows from a join and work them all into ONE row per identifier? Any help I can garner would just make my week! TIA! J
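The row explosion above is the cross product of every match in every UDF table. One way to get exactly one row per sample, which also sidesteps the ANSI-92 join limitation, is a correlated subquery per column; MAX() here is just an arbitrary, assumed way to collapse duplicates, and only two of the ten tables are shown:

    SELECT s.PK_SampleUID,
           (SELECT MAX(u1.AlphaData)
            FROM UDFSAMPLEDATA01 u1
            WHERE u1.FK_SampleUID = s.PK_SampleUID) AS AlphaData01,
           (SELECT MAX(u2.AlphaData)
            FROM UDFSAMPLEDATA02 u2
            WHERE u2.FK_SampleUID = s.PK_SampleUID) AS AlphaData02
    FROM SAMPLES s

Samples with no match in a table come back with NULL in that column instead of disappearing or multiplying.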
Help! I'm trying to understand the new ANSI join syntax (after many years of coding using the old style). I am now working with an application that only understands ANSI syntax, so I am struggling. My first (old-style syntax) SQL statement below produces 60 rows:

    SELECT A1.CONTACTID, A1.LASTNAME, A1.FIRSTNAME, A1.ACCOUNT,
           A6.CITY, A6.STATE, A1.WORKPHONE, A1.FAX, A1.EMAIL
    FROM CONTACT A1, ADDRESS A6
    WHERE A1.ADDRESSID = A6.ADDRESSID
    AND A1.CONTACTID IN
        (SELECT A4.CONTACTID
         FROM CONTACT_LEADSOURCE A4, LEADSOURCE A5
         WHERE A4.LEADSOURCEID = A5.LEADSOURCEID
         AND A5.DESCRIPTION = 'some_description')
    AND A1.CONTACTID IN
        (SELECT A2.CONTACTID
         FROM TICKET A2, ENROLLHX A3, EVENT A7
         WHERE A3.STATUS IN ('R', 'Confirmed')
         AND A2.TICKETID = A3.EVXEVTICKETID
         AND A3.EVENTID = A7.EVENTID
         AND A7.CODE IN ('AHS00','AHS01','AHS02','AHS03','AHS04','AHS98','AHS99'))
    ORDER BY A1.LASTNAME ASC

I am trying to convert this to the newer ANSI syntax. My second SQL statement below produces 67 rows (duplicates):

    SELECT A1.CONTACTID, A1.LASTNAME, A1.FIRSTNAME, A1.ACCOUNT,
           A6.CITY, A6.STATE, A1.WORKPHONE, A1.FAX, A1.EMAIL
    FROM CONTACT A1
    JOIN ADDRESS A6 ON (A1.ADDRESSID = A6.ADDRESSID)
    JOIN (SELECT C.CONTACTID
          FROM CONTACT C
          JOIN CONTACT_LEADSOURCE A4 ON (C.CONTACTID = A4.CONTACTID)
          JOIN LEADSOURCE A5 ON (A4.LEADSOURCEID = A5.LEADSOURCEID
                                 AND A5.DESCRIPTION = 'some_description')
         ) AS C1 ON C1.CONTACTID = A1.CONTACTID
    JOIN (SELECT C2.CONTACTID
          FROM CONTACT C2
          JOIN TICKET A2 ON (C2.CONTACTID = A2.CONTACTID)
          JOIN ENROLLHX A3 ON (A2.TICKETID = A3.TICKETID AND A3.STATUS IN ('R', 'Confirmed'))
          JOIN EVENT A7 ON (A3.EVENTID = A7.EVENTID
                            AND A7.CODE IN ('AHS00','AHS01','AHS02','AHS03','AHS04','AHS98','AHS99'))
         ) AS C3 ON C3.CONTACTID = A1.CONTACTID

Can anyone shed some light on what I am missing? cheers, Norm
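The difference is that IN (subquery) only tests membership, so a contact matching five tickets still appears once, while joining to a derived table returns one row per match. IN and EXISTS are themselves ANSI syntax, so the simplest faithful conversion keeps the joins ANSI-style and the semi-joins as predicates; a sketch of the shape, using the same tables as above (only the first semi-join shown):

    SELECT A1.CONTACTID, A1.LASTNAME, A6.CITY
    FROM CONTACT A1
    JOIN ADDRESS A6 ON A1.ADDRESSID = A6.ADDRESSID
    WHERE EXISTS (SELECT 1
                  FROM CONTACT_LEADSOURCE A4
                  JOIN LEADSOURCE A5 ON A4.LEADSOURCEID = A5.LEADSOURCEID
                  WHERE A4.CONTACTID = A1.CONTACTID
                    AND A5.DESCRIPTION = 'some_description')
    ORDER BY A1.LASTNAME

Alternatively, keep the derived-table joins but make each one SELECT DISTINCT so it returns at most one row per CONTACTID.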
Why is it that SQL joins (*=) run a little faster than ANSI joins (LEFT JOIN...)? Aren't they supposed to be almost identical?
The issue is this: we are promoting ANSI syntax for the obvious reasons (future versions of SQL Server may not support the old SQL Server syntax; portability, etc.).
However, the problem is the speed. What have others done about this? Do you use ANSI syntax or the old SQL Server syntax? How true is it that future SQL Server versions may discontinue support for the "*=" and "=*" join operators?
A question for everyone: With the introduction of SQL 2005, we now have to use ANSI-92 T-SQL syntax, and I was wondering if anyone had written a tool to convert queries from the old join syntax to the new.
We have some code that has to change for the outer joins, but we also have a lot of code that should change for the inner joins. It doesn't seem that difficult to write something that parses an old piece of code and at least suggests a new version, especially if the conversion tool didn't have to be written in SQL itself.
I've been using this syntax for years on SQL Server and now comes the time to convert to SQL 2005 (90 compatibility). This syntax returns four rows; basically it returns one row for each servername/component/context/property/value even when no property of 'fff' exists, since it's a left join:
Code Block

    select t1.*
    from tblconfiguration t1, tblconfiguration t2
    where t1.component = 'AdjProcessUtility'
      and t1.servername *= t2.servername
      and t1.component *= t2.component
      and t1.context *= t2.context
      and t1.property = 'proc'
      and t2.property = 'fff'
When the converted query (produced by SQL Enterprise Manager) runs, it returns no rows:
Code Block

    SELECT t1.*
    FROM dbo.tblConfiguration t1
    LEFT OUTER JOIN dbo.tblConfiguration t2
      ON t1.ServerName = t2.ServerName
     AND t1.Component = t2.Component
     AND t1.Context = t2.Context
    WHERE (t1.Component = 'AdjProcessUtility')
      AND (t1.Property = 'proc')
      AND (t2.Property = 'fff')
I don't really see how to change this query to make it work. I've searched the web and I don't see any examples of left joins that use more than one column.
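A likely explanation, for what it's worth: with *=, a predicate on the inner table (t2.property = 'fff') behaves as part of the outer join itself, but the converted query applies it in the WHERE clause after the join, which throws away exactly the NULL-extended rows the left join produced. Moving that predicate into the ON clause should restore the old behavior; a sketch:

    SELECT t1.*
    FROM dbo.tblConfiguration t1
    LEFT OUTER JOIN dbo.tblConfiguration t2
      ON t1.ServerName = t2.ServerName
     AND t1.Component = t2.Component
     AND t1.Context = t2.Context
     AND t2.Property = 'fff'          -- inner-table filter belongs in the ON clause
    WHERE t1.Component = 'AdjProcessUtility'
      AND t1.Property = 'proc'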
Here's the table definition:
Code Block

    CREATE TABLE dbo.tblConfiguration (
        ServerName VARCHAR(30) NOT NULL,
        Component VARCHAR(255) NOT NULL,
        Context VARCHAR(255) NOT NULL,
        Property VARCHAR(255) NOT NULL,
        CONSTRAINT PK_tblConfiguration PRIMARY KEY NONCLUSTERED
            (ServerName, Component, Context, Property),
        Value VARCHAR(255) NOT NULL
    )
I use this table to define reports and their attributes. The rows repeat themselves except for the Property and Value columns. Here is some of the data:
I am on a project to develop a route-finding system that searches for the optimal route for users of the system. The current version that I've done and successfully run uses normal database access in MS SQL 2005: I store node information in the database, and the application queries it with normal SELECT statements and returns a DataTable object. The result is rather slow, caused by the many round trips to the database server. The application takes 8 seconds to find a short route, without yet considering the traffic-information calculations I will add later. Any comments on the architecture, or on how to switch my algorithm to T-SQL?
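Moving the search server-side eliminates the per-node round trips. As one possible shape, SQL Server 2005's recursive CTEs can expand routes in a single statement; this sketch assumes a hypothetical Edges(FromNode, ToNode, Weight) table and uses a crude string-based cycle check, so treat it as a starting point rather than a finished algorithm:

    DECLARE @Start int, @End int;
    SET @Start = 1;
    SET @End = 9;

    WITH Routes AS (
        -- anchor: all edges leaving the start node
        SELECT e.ToNode,
               CAST(e.Weight AS float) AS TotalWeight,
               CAST(e.FromNode AS varchar(max)) + '->' + CAST(e.ToNode AS varchar(max)) AS Path,
               1 AS Hops
        FROM Edges e
        WHERE e.FromNode = @Start
        UNION ALL
        -- recursive step: extend each partial route by one edge
        SELECT e.ToNode,
               r.TotalWeight + e.Weight,
               r.Path + '->' + CAST(e.ToNode AS varchar(max)),
               r.Hops + 1
        FROM Routes r
        JOIN Edges e ON e.FromNode = r.ToNode
        WHERE r.Hops < 10  -- bound the search depth
          AND r.Path NOT LIKE '%' + CAST(e.ToNode AS varchar(12)) + '%'  -- crude cycle guard
    )
    SELECT TOP 1 Path, TotalWeight
    FROM Routes
    WHERE ToNode = @End
    ORDER BY TotalWeight;

For larger graphs, a frontier-table loop (Dijkstra in a WHILE loop over a temp table) usually scales better than enumerating all paths.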
When I was using Enterprise Manager (SQL 2000) and stored my query result to a file, it was stored with ANSI encoding.
After I upgraded to SQL Server Management Studio (2005), it seems that when I store the query result to a file it is Unicode encoded. This gives me a lot of trouble with other programs that must read this file. For a small result set, I can open it in Notepad, save it as ANSI, and then use it in my other program. When the result set is so big that Notepad can't help me, I can't use the stored result set at all.
There must be a way to store my query result with ANSI encoding, but I don't know how. I hope someone knows and can help me solve this big problem.
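One workaround, offered as a suggestion rather than the definitive answer: skip the Management Studio save dialog for big result sets and have bcp write the file directly in character (non-Unicode) format:

    bcp "SELECT col1, col2 FROM MyDB.dbo.MyTable" queryout C:\out\result.txt -c -T -S MYSERVER

The -c switch requests character (ANSI) output where -w would produce Unicode; the database, columns, path, and server name here are placeholders. sqlcmd similarly accepts a -f option to set an output code page.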
At my current employer we are struggling with the best way to manage security and deployment of a project that contains databases, SSIS, SSAS and SSRS components, using configurations.
Environment (Dev):

- 3 SQL Server databases, all using mixed-mode security with SQL Server security credentials
- 12 SSIS packages: one master package, eleven child packages, 3 shared data sources
- 1 SSAS database: one cube, 15 dimensions, three referenced data sources from the SSIS project (in the same solution)
- 6 SSRS reports, one data source to the cube (not shared; it doesn't appear SSRS can share data sources with other projects in the solution? Why?)
Everything runs fine in development. Now comes the tricky part.
Deploying SSIS and SSAS into production environments:
- Packages use XML config files for connection strings to three relational data sources.
- Deploy to SQL Server storage. The deploy wizard copies package dependencies (including XML config files) to the default location set in the INI file. When I do this, no config file shows up on the remote server (the remote server is not set up identical to the local one, so the directory does not exist. Need a UNC path?). So, being a developer with no "special" permissions on the PROD server, what security permission is allowing the deployment wizard to copy files to this location on a production server?
- Using a deploy script with dtutil doesn't copy the SSIS dependencies (see the dtutil sketch after this list). Is this a matter of using COPY or XCOPY to copy the configuration files to the dependency location? Again, in real-world practice, do developers typically change this location in the INI file, or stick with the default? In either case, how does security work that allows files to get copied to the remote folder (i.e. manually, or does SQL Server manage this folder permission through some other magic)?
- When using SSMS and running the package after it is deployed on the remote server, if the config path is the default (e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Packages...) it appears to be read from the local machine's directory rather than the remote machine's path (do I need to use UNC paths? The wizard doesn't seem to give this option).
- When scheduling the job from SQL Agent, does the proxy account need permissions to the folder the config files sit in?
- What about the role security on the packages themselves? Where do the server roles come into play (dtsltuser, dtsadmin)?
- Because the SSAS project uses connection references to the SSIS project in BIDS, and the SSIS project uses configurations, will SSAS pick up on these connections?
- What about impersonation levels for SSAS? Leave all data sources set to default and set the database impersonation level to "UseServiceAccount"? What if the developer is not the same as the OLAP administrator on the production server? In that case Use Service Account isn't an option, and neither is the current user's credentials.
- The SSAS database also has security for Full Control, but that still doesn't prevent security at the data source level within the database (talking about impersonation level, not source DB credentials).
- How can SSRS connections leverage other shared connections?
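On the dtutil point: dtutil only copies the package itself, so the .dtsConfig dependencies do need a separate copy step. A hypothetical two-line deploy script (server, paths, and package names are placeholders, and the share must already grant the developer write access):

    dtutil /FILE "C:\ETL\Master.dtsx" /DestServer PRODSQL /COPY SQL;"Master"
    xcopy "C:\ETL\Config\*.dtsConfig" "\\PRODSQL\SSISConfig\" /Y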
As you can see, there are a ton of security considerations, none of which are intuitive; each can be configured multiple ways that actually work (and a ton of ways that won't).
I need a simple cheat-sheet about each step to take to configure this so multiple developers can work without interruption, hot-deploying SSIS, SSAS, and SSRS changes into different environments (QA, PROD).
Hi. I don't understand what they mean when they say developing OLTP solutions. Can anybody please explain it to me? Also, does anyone know what ways there are to develop SQL OLTP solutions using SQL 2005 Reporting Services, OLTP, and Excel Services, as well as any good tutorials for it?
Hello everybody. I work at a company in Iceland and we have developed a 3-tier solution written in ASP - Visual Basic - MSSQL 2000. Four companies are using the solution almost constantly, accessing it through a browser. The connection has never (yet) gone down in a way that affected our clients, but we are thinking about how to run the solution locally at every site and then replicate to a main server hosted at our place.

My question is: does this affect speed for the clients that are using the solution, or is there a better way of doing this? The solution is a ticket sale system, our clients use it every day, and people sitting at home should be able to order tickets online. Because of that we can't update the database every 5 or 15 minutes; we don't want a double booking of the same seat.

Any help appreciated! - Sindri
I guess the subject line sums it all up, but I need some experienced explanation of what a solution and a project represent, and how I can use them to my advantage.
Is a solution an entire database? If so, how can I create a solution from my existing databases?
Is either of them a way to collect together scripts etc. which will be run against a production database when the solution is rolled out?
What is a project? Is it a single set of scripts related to an upgrade to a database? If so, can it be executed as a single entity? How is the sequence of execution controlled?
And so on and so on....
2005 is such a step backwards for DBAs, with all the features we used to have and now don't. If it weren't for the fact that MS will eventually stop supporting 2000, I frankly see no incentive to upgrade.
Ok, OK, flame off.
Can anybody suggest some resources which might give me some insight into these questions?
1) Do I have to install publishing on both servers (A and B) even though one will be the publisher and the other will be the subscriber?
2) a. Can named pipes be used for communication between these two servers, which are on the same domain but not on the same network? Why or why not, whatever the answer may be? b. If I use TCP/IP, is the connection set up using the Client Configuration utility? How is the connection string set up in this case? c. Suppose the publishing server was not using NetBEUI. Could this pose any problems for communication (is using lmhosts sufficient in this scenario)?
3) I have set up a (remote) SQL Server to be a Publisher/Distributor. Both SQL Servers have been configured as remote servers relative to each other. Following are the steps I have carried out to set up replication:
On the Publication Server (a remote server) I went to Server --> Replication Configuration --> Install Publishing.
Next, I chose a local distribution server. I think that the instdis.sql script ran fine because the distribution database was installed successfully.
Next, I went into Manage Publications from the Server menu to set up the publications.
When I went to the subscription server to subscribe to the published articles, I got the following error message:
Error 14093: [SQL Server] You must be System Administrator (SA) or Database Owner (dbo) or Replication Subscriber (repl_subscriber) to execute the stored procedure.
According to this article from last year: http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1251149,00.html

These are the main options:

* Merge Replication
* Bi-directional Transactional replication
* Immediate Updating
* Queued Updating
* Peer to Peer
* RDA

Are there any new alternatives that have popped up over the last year? Are all six above still good options depending on needs?

We currently have a three-server topology using merge replication:

ServerA (App1DB) <--> ServerB (App2 DB)
ServerA (App1DB) <--> ServerB (App3 DB)
ServerA (App1DB) <--> ServerC

ServerA supports 1 intranet application using 1 DB. ServerB supports 2 extranet applications using 2 DBs (1 per application). ServerC is our DW server, where we have installed a Search DB used by all applications.

Prior to our "upgrade" to merge replication we were using 1-way transactional replication, so our topology looked like:

ServerA --> ServerB (App2 DB)
ServerA --> ServerB (App3 DB)

We also had linked servers between ServerB and ServerA, as well as between ServerC and ServerA, to update data on ServerA. We would simultaneously update/insert the tables on ServerB/C and create custom stored procedures to handle the data already processed from the subscribers. With our new implementation we are seeing more latency as well as locking, since merge replication is not running off of transaction logs anymore.

My main question is: would we see an increase in performance and less locking with a topology like this?

Master <--> ServerA (App1 DB)
Master <--> ServerB (App2 DB)
Master <--> ServerB (App3 DB)
Master <--> ServerC

Here Master is a server and DB supporting no applications (hence no OLTP). Would latency be the same/better/worse? Should we stick with our current implementation and just performance-tune it?

A secondary question: given the bidirectional replication options above, did we choose the best one for us? These servers are all on the same network, hosted by the same provider over Gigabit Ethernet (I assume). I think we have the polling interval set at 5 seconds and we are thinking of moving it to 10 seconds at most. Real-time latency is not critical to our business but it would be a "nice to have". For conflict resolution we are keeping it simple: whichever was inserted/updated last "wins". It looks like bi-directional transactional replication might be a better option for us. Would it give us the autonomy we are looking for? Any major "cons" to using bi-directional transactional replication over merge replication (besides scalability)? Scalability may come into play a few years down the road, but for now it is not a high priority. Also, would the Master model described above using bi-directional transactional replication be a successful implementation?

ETA - One thing merge replication gives us is autonomy between our application servers, particularly when ServerA needs to come down for upgrades: the applications on ServerB can still function without any dependencies, unlike before with 1-way transactional replication and linked-server calls.
Does anyone know where I can find a Northwind end-to-end database solution (examples) written in ASP.NET (VB)? I would like to reverse engineer this project to learn more about ASP.NET.
My warehouse app employs a distributed architecture. Extractions from disparate (wildly so) systems, and transformations and loads into a standardized schema are performed at various locations close to the source systems (both physically and "logically" speaking). There are security and other reasons for this. However this causes some related design and implementation challenges for the ETL processing.
For one, the ETL processes must be successfully operated by non-technical medical administrators who actually have little interest in the application, and sometimes even in the analytics produced by the system, who have other more pressing day-to-day work they want to be doing, in organizations where turnover is high, training is spotty, and LANs are fragile and often congested.
So, real-time feedback to the operator during processing is pretty dern important. I have built a fairly sophisticated GUI (using .NET forms inside a script component) for the operational interface; input boxes just wouldn't cut it.
But that interface is lacking real-time feedback on processing progress at runtime.
Anyone got that T-shirt yet? I'm thinking I need progress bars and real-time task and component progress reports. Also, is there a way to capture the built-in logging output in real time?
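One avenue for the last question, assuming the packages log through the SQL Server log provider: in SSIS 2005 that provider writes to dbo.sysdtslog90 in whatever database its logging connection points at, so a monitoring form can poll that table while the package runs. A minimal sketch (the event filter and time window are assumptions):

    SELECT event, source, message, starttime
    FROM dbo.sysdtslog90
    WHERE event IN ('OnProgress', 'OnPreExecute', 'OnPostExecute', 'OnError')
      AND starttime >= DATEADD(minute, -5, GETDATE())
    ORDER BY starttime DESC;

Inside a script task, the same events can also be surfaced directly (e.g. Dts.Events.FireProgress) and bound to a progress bar without touching the log table.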
I have been informed that all my keyword search solutions are susceptible to SQL injection attacks. Does anyone have links discussing basic 'multiple keyword' search solutions? I would think this is a very common routine (perhaps so much so that only newbies like myself do not know it). I have read the posts about escaping ', doing Replace("'", "''"), and using parameters, and yet every multiple-keyword solution I come up with is said to be injection prone. Example: a visitor enters

    Tom's antiques

into a TextBox control, and the C# code-behind securely generates the call to the database below.

    SELECT L_Name, L_City, L_State, L_Display
    FROM tblCompanies
    WHERE L_Kwords LIKE '%' + 'Tom's' + '%'
      AND L_Kwords LIKE '%' + 'antiques' + '%'
      AND L_Display = 1
    RETURN

I understand that concatenating string parts using an array and then passing the sewn-together string to a stored procedure exposes it to injection. I hope my single-keyword routine below is secure; if it is not, then I am not understanding how parameterized stored procedures are supposed to be constructed to protect against injection.

    string CompanyName;
    CompanyName = TextBox1.Text;

    PROCEDURE CoNameSearch @CompanyName varchar(100)
    AS
    SELECT DISTINCT L_Name, L_Phone, L_City, L_State, L_Zip, L_Enabled, L_Display
    FROM tblLinks
    WHERE (L_Name LIKE @CompanyName + '%')
      AND L_Enabled = 1 AND L_Display = 1
    ORDER BY L_Name
    RETURN
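The single-keyword procedure above is safe because @CompanyName travels as data and is never spliced into the SQL text; the same idea extends to multiple keywords as long as each keyword gets its own parameter. A sketch assuming a fixed maximum of three keywords (the procedure and parameter names are hypothetical):

    CREATE PROCEDURE KeywordSearch
        @Kw1 varchar(100),
        @Kw2 varchar(100) = NULL,
        @Kw3 varchar(100) = NULL
    AS
    SELECT L_Name, L_City, L_State
    FROM tblCompanies
    WHERE L_Kwords LIKE '%' + @Kw1 + '%'
      AND (@Kw2 IS NULL OR L_Kwords LIKE '%' + @Kw2 + '%')
      AND (@Kw3 IS NULL OR L_Kwords LIKE '%' + @Kw3 + '%')
      AND L_Display = 1

The C# side splits the TextBox input on spaces and assigns each piece to a SqlParameter; the apostrophe in "Tom's" is then just a character in the data, not a string delimiter.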
I'm trying to figure out what solution (replication, mirroring, clustering) would work best for me.
I have been reading many articles in BOL and in this forum. Most talk about getting data TO a backup/standby/subscriber, but I can't find a lot of info regarding getting the data BACK after a disaster is over.
We have a main office and a disaster recovery facility. Most of the time there are no data updates at the disaster location. So, I need to get data to the disaster facility via WAN (latency is not a huge issue - end of day syncing is fine) for backup purposes. In the event of a disaster, the main office will be offline and data changes will happen at the disaster site. When the disaster is "over" and we return to the main office, what's the best scheme to reverse the data back to the main office to start business again? We are a financial company, and have gigabytes of relatively static data. Most changes are current day. So, to snapshot a 100GB database when I know only a few hundred MB changes a day doesn't seem feasible to me.
Most replication scenarios (at least from what I see) can't easily "reverse" the replication after a disaster situation. I'm looking at merge replication on a schedule which seems to look good, but was wondering if anyone else has any ideas or suggestions?
We find that a DELETE command on a table, where the rows to be deleted involve an inner join between the table and a view formed with an outer join, sometimes works and sometimes gives error 625.
If the delete is recoded to use the JOIN keyword instead of the = sign, then it always gives error 4425.
Error 625 (severity 21): Could not retrieve row from logical page %S_PGID by RID because the entry in the offset table (%d) for that RID (%d) is less than or equal to 0.

Error 4425 (severity 16): Cannot specify outer join operators in a query containing joined tables. View '%.*ls' contains outer join operators.

The delete with a correlated subquery instead of a join works.
The error 4425 text would imply that joins with views formed by outer joins should be avoided.
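For reference, the correlated-subquery form that avoids joining to the outer-join view directly looks something like this (the table, view, and column names are hypothetical):

    DELETE FROM dbo.BaseTable
    WHERE EXISTS (SELECT 1
                  FROM dbo.OuterJoinView v
                  WHERE v.KeyCol = dbo.BaseTable.KeyCol
                    AND v.FilterCol = 'some value')

Since the view is only read inside the subquery, the restriction behind error 4425 on mixing joined tables with outer-join views in the DELETE never comes into play.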
I'm about to start on my first reporting services project, but before I mess it up, I'm looking for some guidance on how best to achieve my mission. Here's what I'm looking to achieve:
I have a datacentric application (SQL Server 2005 Express w/ Advanced Services backend) in which I want to build about 50 "canned" reports for the end users. I want to build the reports utilizing server mode so I can take advantage of some of Reporting Services advanced features. I'm not sure what the best practice would be to build the reporting services project. Is it better to include the report project as another project within the application solution? Or, should I build the report project independent of the application solution? What are the pros and cons of doing it either way? How does including the report project build if it's included in the application solution? How would a ClickOnce deployment deploy the report project to the report server?
My ultimate goal would be to have an "off-the-shelf" software solution that includes an installation package consisting of the application project and report project. Is it even possible, given the Reporting Services architecture, to achieve an install in this manner with ClickOnce, Windows Installer, or InstallShield? Or is building the report project independent of the application project and deploying the reports to the report server "manually" (i.e. deploying within the report server project) the only solution?
SQL Server 2000. Howdy all. Is it going to be faster to join several tables together and then select what I need from the set, or is it more efficient to select only those columns I need in each of the tables and then join them together? The joins are all on integer primary keys and the tables are all about the same. I need the fastest, most efficient method to extract the data, as this query is one of the most used in the system. Thanks, Craig
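For concreteness, the two shapes in question might look like this (hypothetical tables; the derived tables are the only difference):

    -- Shape 1: join everything, then pick columns at the end
    SELECT o.OrderID, c.Name
    FROM Orders o
    JOIN Customers c ON c.CustomerID = o.CustomerID;

    -- Shape 2: trim each side to the needed columns first, then join
    SELECT o.OrderID, c.Name
    FROM (SELECT OrderID, CustomerID FROM Orders) o
    JOIN (SELECT CustomerID, Name FROM Customers) c
      ON c.CustomerID = o.CustomerID;

In principle the optimizer prunes unreferenced columns on its own, so the two usually produce the same plan; listing only the columns you need in the final SELECT (rather than SELECT *) is the part that reliably matters.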
I installed Visual Studio 2005 and SQL Server 2000 Enterprise. I called my SQL Server instance "PALAGSQL". My computer is called "PEARL". When I try to create a connection in Visual Studio 2005, I put Server Name: "PEARL/PALAGSQL", but I receive error 40, "Could not open a connection". What did I miss? What shall I do?
This question is so easy I can't find the answer anywhere! Why is SQL Server called 'Server'? I understand it is a lot more capable and robust than Access, but where does 'server' come into it? I currently use ASP and Access for dynamic websites, but it is time to move up a notch. Do I just buy SQL Server, make a database, upload it to the same place as before and hey presto? Can I run lots of websites (on different domains and servers) from databases created with my single-license Standard Edition? Thanks, M
Is there a way to set up the SQL Server Log files so that it automatically creates a new one (or overwrites the old one) when the old one is full?
We keep having people shut out of our web site due to the log file being full. Since we have no DBA I volunteered to try to find out how we can avoid this faux pas in the future.
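If the file that fills up is the transaction log (which matches the symptom), two standard remedies, offered as suggestions since the recovery requirements here are unknown:

    -- If point-in-time recovery is not required, SIMPLE recovery lets log space be reused automatically:
    ALTER DATABASE MyWebDB SET RECOVERY SIMPLE;

    -- Otherwise, schedule frequent log backups so the log can be truncated and reused:
    BACKUP LOG MyWebDB TO DISK = 'D:\Backups\MyWebDB_log.bak';

MyWebDB and the backup path are placeholders. Enabling autogrow on the log file in the database properties also prevents the hard stop, at the cost of unbounded disk use.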
I have been using SQL Server 2K for a couple of months now and I have been entering data into my tables through Enterprise Manager. I do this by selecting the table, right-clicking on it, and selecting Return All Rows.
After months of use, this method of entering data has become tedious and cumbersome. Is there an easier way to enter data into SQL Server? I don't know how to program Visual Basic, but I know how to create dynamic websites.
Microsoft Access offers the ability to make Forms, so that data entry is easier. Can SQL Server offer anything similar? Maybe SQL Server 2005 offers a GUI for data entry?
I know about @@version and SERVERPROPERTY, but I am looking for a query that will distinguish between SQL Server 2000 and 2005. I need to write an if/then statement and don't want to add all of the 9.xxx.xx versions for 2005 and keep that updated with each hotfix that comes out. -Kyle
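One way that avoids tracking builds, since the major version number alone distinguishes the releases (8 = 2000, 9 = 2005):

    DECLARE @ver varchar(20), @major int;
    SET @ver = CAST(SERVERPROPERTY('ProductVersion') AS varchar(20));
    SET @major = CAST(LEFT(@ver, CHARINDEX('.', @ver) - 1) AS int);

    IF @major >= 9
        PRINT 'SQL Server 2005 or later';
    ELSE
        PRINT 'SQL Server 2000 or earlier';

Hotfixes only change the build portion after the first dot, so the test needs no maintenance.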
1) What is the restriction on the maximum number of tables that can be used in any one query in v7.0? In 6.5 it was a max of 16 :( 2) Are non-ANSI-style joins permitted in v7? Just that we're thinking of upgrading, but 50% of our stored procs/views have the old *= syntax. Thanks, Brad Carr
ok people, this is getting seriously frustrating! Please help!
As mentioned in a previous post, one of my batch jobs is printing fields with padding added, even when the table column is defined as varchar.
I've been to the knowledge base, read the article on ANSI padding, and ran the test scripts. But when I ran the SELECT to display the columns, THE OUTPUT FOR BOTH TABLES WAS IDENTICAL!!! Apparently the SET ANSI_PADDING ON/OFF option had no effect!
What am I doing wrong? What is missing? Do I have to run the SET ANSI_PADDING option in the master DB context? Have I unknowingly overridden the option somewhere else? Must I brush up on my COBOL for a career change?
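A detail that often explains this symptom (offered as a guess at what went wrong here): ANSI_PADDING is captured per column at CREATE TABLE time, so the setting must differ when each test table is created, not when the SELECT runs. A minimal repro:

    SET ANSI_PADDING ON;
    CREATE TABLE dbo.PadOn (c varchar(10));
    SET ANSI_PADDING OFF;
    CREATE TABLE dbo.PadOff (c varchar(10));

    INSERT dbo.PadOn VALUES ('abc   ');
    INSERT dbo.PadOff VALUES ('abc   ');

    -- DATALENGTH exposes the difference: 6 bytes vs. 3 bytes,
    -- because the OFF column trims trailing blanks on insert.
    SELECT DATALENGTH(c) FROM dbo.PadOn;
    SELECT DATALENGTH(c) FROM dbo.PadOff;

If both tables were created under the same session setting, their output will be identical no matter what SET statements run afterwards.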