I'm still a database newbie, so I would like to solicit thoughts about
the smartest way to do something in SQL Server.
My company has a web application that we customize for each client.
We can do this because everything is database driven. We have
database tables that contain our HTML, as well as some standard
tables in each database. We have an in-house app that lets us tweak
both of these things and creates a new web site and database tailored
to each project.
Each of these sites has a table that stores a schedule our clients
use.
The records in this schedule table change when information in the
other custom-generated tables changes.
My company currently uses a legacy FoxPro app to update the schedule
table.
The FoxPro app connects to SQL Server, reads a table containing a list
of tables and scheduling information to check, checks each of those
items, and updates the schedule table.
I would like to lose the FoxPro app.
At first thought, as a database newbie, putting triggers in each of
the tables to update the schedule when something changes seems like
the way to go.
However, since we change a part of the schema for each client (we
have an app that generates the database tables unique to each client),
I would like a scheme that does not involve creating a different
trigger for each new table.
I would also like something that updates in real time. Right now the
FoxPro app is executed once a day.
I was thinking of making a large stored procedure and putting an
identical call to that procedure in each table.
Each table would have the same trigger, fired when a record is
altered, which would call the stored procedure with the relevant
arguments to update the schedule.
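For illustration, here is a minimal sketch of what I mean; the table name dbo.Orders and the procedure name usp_UpdateSchedule are made up:

create trigger trg_Orders_Schedule on dbo.Orders
after insert, update, delete
as
begin
    set nocount on;
    -- every generated table would get this same trigger body;
    -- only the table name passed to the shared procedure changes
    exec dbo.usp_UpdateSchedule @SourceTable = N'dbo.Orders';
end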
Does this sound like a smart way to solve this problem or am I not
thinking "database enough"?
For intranet development, our DBAs are asking web developers to use fixed domain NT ID accounts instead of SQL accounts to connect to backend databases in all web applications. We are: Windows XP workstations in a 2003 Active Directory Topology.
I don't think that this is a good idea (I could say more:) but I have found very little to no information on this subject. So I ask you guys... What are your thoughts? Why or why not?
Hi, I'm in the process of designing a DB (a typical management system DB: 2 transaction tables and about 5 look-up ones) for one of the departments in our company. The user wants this DB, and thus the client (forms, reports, etc.), solely for his department. However, I expect that soon after deployment other users will want similar DBs and clients, so I will face the problem of integrating those DBs if company management wants to implement it as an enterprise DB. My question (maybe you could also tell me about other groups specialized in these kinds of issues): how should I create such a DB? Should I create an extra look-up table for departments, give each department its own ID, and link it to the main transaction table?

MTIA,
Grawsha
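Something like this is what I have in mind for the departments look-up table (all names here are placeholders):

create table dbo.Department (
    DepartmentID int identity(1,1) primary key,
    DepartmentName varchar(100) not null
);

create table dbo.TransactionMain (
    TransactionID int identity(1,1) primary key,
    DepartmentID int not null
        references dbo.Department (DepartmentID)
    -- ...the rest of the transaction columns...
);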
I noticed that the current SQL CE driver does not offer support for the APM (Asynchronous Programming Model). Are there any plans to do this in the future? In light of the lack of APM functionality, does anyone have any ideas or thoughts on how async operations could be done, or whether they are even needed in the context of applications that use SQL CE?
I was hoping to elicit some feedback on a trend I am seeing in the Portal market, and specifically with SharePoint development.
If you are not familiar with SharePoint, there is a data table abstraction within SharePoint called a "List". Lists are used for storing data (duh!). However, they are built using the SharePoint front end, and the data entered into all lists is stored in a few tables in the SharePoint content database.
What I am seeing is SharePoint gurus recommending AGAINST storing your relational data within database tables, and in SharePoint lists instead. I am not sold on this approach, and it actually makes me think we are taking a step backwards with regards to persistent data storage and best practices.
- Lists cannot be natively related to one another; however, they support "lookups".
- Anyone can create a list... and repeat the same data all over the enterprise.
- Lists are maintained in two tables within the SharePoint content database using meta-data patterns.
- Portals contain a multitude of sites. Users and portal admins can create lists all over the place, thus spreading related data over a wide swath of the enterprise.
Is it just me, or are SharePoint pundits absolutely CRAZY to be recommending persistent data storage using lists? I see nothing but problems arising from this approach.
I apologize beforehand if you have not worked with SharePoint and Lists, as this post may not make much sense to you. ;)
Thought I should post in the newbie forum for a while, instead. :-)
I have a couple of scripts that I've generated that drop a couple of system stored procedures and recreate them. I'm not sure why I did it that way in the first place, but I think it was that it wouldn't let me run an ALTER statement on them. Specifically, I'm now looking at sp_add_operator. I changed it to a 500-character email field instead of whatever it was (100, I think).
/* Explanation: Why did I do that? SQL Mail is prohibited here, so I'm using CDO_Sysmail to email myself and the developers if a job fails. The list of people to email is determined by the owner of the database, who is also an operator in SQL. I get the list of emails from the email field of the operator properties. Hence, I need a bigger email field. Yes, I now know it would most likely be better to create an ADMIN database on each server for this kind of stuff. (Thanks to Tara for that blogged suggestion.) */
While I will probably go back to the default stored procedure, this got me to thinking: when would it be better to use an ALTER statement on a SProc rather than to do a DROP and CREATE?
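For context, the trade-off I'm weighing looks roughly like this (dbo.usp_Demo and SomeRole are made-up names):

-- ALTER keeps the same object, so existing permissions survive
alter procedure dbo.usp_Demo
as
    select 1;
go

-- DROP/CREATE makes a brand new object, so grants have to be reapplied
drop procedure dbo.usp_Demo;
go
create procedure dbo.usp_Demo
as
    select 1;
go
grant execute on dbo.usp_Demo to SomeRole;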
Folks, I have a quick question that I would very much appreciate some feedback on. We are a not-for-profit charity organization that has decided to develop software in-house to manage our volunteers. We have SQL Server, and that makes the most sense as a database solution, but we have some issues surrounding the choice of the development language. Some have suggested 100% Java while others say Visual Basic. The head of our team has suggested we do it in ColdFusion, since this will be an internet-based application, and I would very much like some feedback on that choice. We have about 5 organizations that we will tie into this system, with about 5000 users logging in once per month. Any suggestions or comments would be greatly appreciated.

Cheers,
Wade
I have a database with a dozen or so tables. No table constraints. Logic is all in stored procedures.
I have several Excel spreadsheets of data to import into the database, one spreadsheet per table. Each spreadsheet has additional data (columns) that the corresponding table has no interest in and should be ignored.
I would appreciate your thoughts on methods and best practices for loading this data to the database.
I am about to investigate SQL Server 2005 Express handling of XML. I am familiar with XML and XSL conversions and it seems to me that XSL conversion of Excel data to XML gives me a lot of flexibility prior to database import for shaping the data.
In short, importing data to the database from an XML source.
I am not familiar with SQL Server's XML capability and would appreciate thoughts on this while I look into it.
And of course alternate ways that I am overlooking.
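To make it concrete, this is roughly the kind of thing I'm picturing for the XML import step, assuming SQL Server 2005's xml data type and nodes() method behave the way I've read (dbo.Inventory and the element names are invented):

declare @x xml;
set @x = N'
<rows>
  <row><Name>Widget</Name><Qty>10</Qty></row>
  <row><Name>Gadget</Name><Qty>4</Qty></row>
</rows>';

-- shred the XML rows into the target table
insert into dbo.Inventory (Name, Qty)
select r.value(N'(Name)[1]', N'varchar(50)'),
       r.value(N'(Qty)[1]',  N'int')
from @x.nodes(N'/rows/row') as t(r);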
I just learned I can deploy and schedule jobs to run SSIS packages (via job/sqlagent) without the Integration Service (agent) itself actually running alongside (or on) the server. (Double-click on manifest, deploy IS package to server, create job/job step to run IS package, watch it run even when integration service is completely disabled)
Other than convenient viewing, configuring, and RMC running w/in SQL Server Mgmt Studio 2005, why then do I need the integration service running on a production box at all? When do I really need the IS service itself?
In our (finance) world only (a) an act of God or (b) a DBA can touch production databases/servers. Allowing anyone to connect to yet another service - in this case, an integration service - to meddle w/ a package would be a no no, so...
1) Could I trouble someone for a concrete, critical reason why the DBA should enable it on a production server? Speed? Caching? Peace of mind knowing everything is piled onto and neatly running on the server?
2) On a more minor note, if I'm deploying a package to be housed solely w/in MSDB, is there anyway to prevent the prompt of a file location during deployment, i.e. the creation of an empty directory that would otherwise hold package dependencies if I were running it as a file?
We'd like to deploy only to MSDB (I know all the pros/cons w/r/t saving dtsxs to files v. msdb) and keep deployments clean (read: all in one place). DR is via SAN-to-SAN replication with, among other things, msdb cleanly getting replicated. We would very much like to avoid having to worry about (more) file/directories sitting out on a server share to be replicated to DR (it seems the default is to allow deployments to directories on the SQL server instance itself..ugh) Any architectural insights on this would be appreciated.
I like the new gig a lot. Real busy, smart folks and I have been in high demand since 5 minutes after my butt hit the chair. I already have code in production.
Anyhow, we have a security situation on the SQL Servers that I pointed out on my first day. So they want me to roll everything over to Windows Authentication and give the developers and report writers more restricted rights inside SQL Server. They have NT groups for different kinds of users and all of that jazz, and I laid out the typical stuff about using NT groups vs. individual accounts and ease of admin vs. granularity of control. Well, the boss came back and said he wants ease of admin AND granularity of control over security. So, does anyone have any fresh thinking on turning my either/or into an AND?
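The best I've come up with so far is mapping each NT group to its own database role, so membership is managed in AD (ease of admin) while permissions stay granular; a rough sketch (all names invented):

-- grant the NT group a login and database access
exec sp_grantlogin N'DOMAIN\ReportWriters';
exec sp_grantdbaccess N'DOMAIN\ReportWriters';

-- give the group its own role and grant permissions to the role only
exec sp_addrole N'ReportReaders';
exec sp_addrolemember N'ReportReaders', N'DOMAIN\ReportWriters';
grant select on dbo.SomeReportView to ReportReaders;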
I've been trying to get a range of values out of my SQL Server 2000 db without success. The field in question has the data type char(8) and looks like this:
House_numbr_pub (leading spaces in front of each value):
 140A
 140
 141
 142
 143
 144
 145
 146
 147
 148
 149
 150
 151
 ...
 14500
 ...
Does anyone know how to write a SQL statement that will return 140-150 (excluding the ' 14500'), or likewise 100-1000? I tried the following WHERE clause to return a range between 100 and 1000.
WHERE (cook.STREET_PUB LIKE 'lincoln%')
  AND (LEN(LTRIM(cook.HOUSE_NUMBR_PUB)) BETWEEN 3 AND 4)
  AND (LTRIM(cook.HOUSE_NUMBR_PUB) >= '100')
  AND (LTRIM(cook.HOUSE_NUMBR_PUB) <= '1000')
This WHERE clause only returns two records (100 and 1000). I want it to return 100-1000.
I also tried the following where clause:
WHERE LTrim(cook.HOUSE_NUMBR_PUB) like '1[45][0-9]' OR LTrim(cook.HOUSE_NUMBR_PUB) like '1[45][0-9]'
However, building this on the fly with .NET will take some effort if someone is trying to search a range like 1-10000000.
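One pattern I believe works for a true numeric range on a char column: keep only the all-digit values and compare them as integers, with a CASE to guarantee the cast only happens on clean rows (a sketch against the columns from my post, not tested):

SELECT *
FROM cook
WHERE cook.STREET_PUB LIKE 'lincoln%'
  AND CASE WHEN LTRIM(RTRIM(cook.HOUSE_NUMBR_PUB)) NOT LIKE '%[^0-9]%'
           THEN CAST(LTRIM(RTRIM(cook.HOUSE_NUMBR_PUB)) AS int)
      END BETWEEN 100 AND 1000
-- the CASE yields NULL for values like '140A', and NULL BETWEEN ... is
-- false, so non-numeric house numbers drop out without conversion errors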
Is it possible to do the following without cursors or creating an identity column:
I have a table of legacy data with ~1 million records. I need to insert these into the new table, which has a unique varchar(11) key. In the new system this key is generated by calling an SP that returns the next key in the sequence. To put the legacy records in the same table, I want to first create a new column at the end of the legacy table and populate it using set-based SQL, without using a cursor and calling the SP for each and every record to get a unique varchar(11) key.
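Assuming SQL Server 2005 or later and a key format that is just a zero-padded number (both assumptions; adjust to whatever your SP actually generates, and LegacyID stands in for any stable column), something like this would do it in one pass:

alter table dbo.LegacyData add NewKey varchar(11) null;
go
declare @LastUsed int;
set @LastUsed = 500000;  -- hypothetical: the last number the SP handed out

with numbered as (
    select NewKey,
           row_number() over (order by LegacyID) as rn
    from dbo.LegacyData
)
update numbered
set NewKey = right(replicate('0', 11)
                   + cast(rn + @LastUsed as varchar(11)), 11);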
Hello forum. I have a problem that is killing me. The initial setup: a table (Tbl_1) to collect data from users (via a VB form), a view (View_1) to compute some columns (it is part of the business logic), and a table (Tbl_2) targeted by a trigger. I will try to summarize the contents of the above tables:

create table Tbl_1 (
    id int not null,
    code varchar(10) null,
    TBO int not null,            -- Time Between Overhauling, hours (life cycle)
    T_Hrs_AtLastOvh int null,    -- total running hours at last overhauling
    TRH int null                 -- total running hours (the life of the equipment)
)
GO

create view View_1 as
select code, [TBO] - ([TRH] - [T_Hrs_AtLastOvh]) as HrsTo_NextOvh
from Tbl_1
GO

create table Tbl_2 (
    id int not null,
    code varchar(10) null,
    Start_dt smalldatetime,
    Stop_dt smalldatetime
)
GO

--drop table Tbl_1
--drop view View_1
--drop table Tbl_2
Let's insert some data into Tbl_1:

insert into Tbl_1
select 1, 'cod1', 2000, 2500, 3000 union all
select 2, 'cod2', 3000, 4000, 7000 union all
select 3, 'cod3', 1000, 2000, 2000 union all
select 4, 'cod4', 1500, 3000, 3000
GO

The result of View_1 is in Fig.1:
Fig.1:

code   HrsTo_NextOvh
cod1   1500
cod2   0
cod3   0
cod4   2500

Fig.2:

code   HrsTo_NextOvh
cod1   1500
cod2   3000
cod3   0
cod4   2500
The operator performs the requested job on the equipment and the life cycle starts counting again. Suppose we have:
update Tbl_1 set T_Hrs_AtLastOvh = 7000 where id = 2
GO

The result of View_1 is in Fig.2. I wish to insert (via a trigger) into table Tbl_2 all codes that have [HrsTo_NextOvh] = 0 in View_1, and to automatically record the date when the record is created, with a property like 'starting job'. After the operator has executed the job, he will update Tbl_1 (the result is in Fig.2) and the trigger has to record this with a property like 'completed job'. Depending on the time between overhauling and the operating hours of the equipment, this happens more or less often. My intention is to record the time needed to execute a job and to build a history of events.
Any suggestion to solve my problem is fully appreciated.
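In case it helps the discussion, this is the rough shape of trigger I have been toying with, using the tables above (simplified and untested; the 'property' is implied by which date column is filled):

create trigger trg_Tbl_1_Jobs on Tbl_1
after update
as
begin
    set nocount on;

    -- 'starting job': open a record for codes whose hours just ran out
    insert into Tbl_2 (id, code, Start_dt)
    select i.id, i.code, getdate()
    from inserted i
    where i.TBO - (i.TRH - i.T_Hrs_AtLastOvh) <= 0;

    -- 'completed job': close the open record when T_Hrs_AtLastOvh is raised
    update t2
    set Stop_dt = getdate()
    from Tbl_2 t2
    join inserted i on i.id = t2.id
    join deleted d on d.id = i.id
    where i.T_Hrs_AtLastOvh > d.T_Hrs_AtLastOvh
      and t2.Stop_dt is null;
end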
Hi, tell me please how I can trace modifications on a table, such as an "insert" of a record into one table, and synchronize a mirror table at the same time once the insert has happened, BUT with no indexes, no trace jobs, and no modifications or objects on the master table... ha?
I'm a bit stuck with this one... hope someone can help.
I'm trying to develop an application that will run on a pocket PC with Windows CE 4.2
I'm using .Net 2003 and the application is in VB.Net.
I can run the application on the pocket pc fine (ie. form paints, buttons work) , until I need to connect to Sql DB on the server.
When I try to create a connection object (Dim dbconnection As New SqlClient.SqlConnection)
I get an error stating .. "This application (test.exe) requires a newer version of .Net Compact Framework than the one installed on the device" .... "could not load System.Data.SqlClient.SqlConnection from assembly System.Data.SqlClient Version=1.0.5000.0"
The version that it is looking for is 1.0.5000.0, which is the version VS2003 uses.
I've downloaded the Compact Framework v1 SP3 and ran all the cabs on the Win CE device... it looked like it installed fine... but the problem still exists.
I have an '05 VB.NET Windows application that will be used as a smart client for our folks in the field. The Windows application includes SQL Server 2005 Express. In the Data Sources of my project I attached a file, going through the wizard: Microsoft SQL Server Database File (SqlClient) = myfile.mdf, and then selected all tables, views, stored procedures, and functions. The corresponding myfileDataSet.xsd and the myfile.mdf are now located in the root of the project. I recompile the project without error and go to the Properties section, Publish tab: under the Application Files button, the Publish Status for myfile.mdf is set to Include and the Download Group to Required. With this in place I right-click the myfile.mdf in Solution Explorer and, in its properties, have set the Build Action to Compile and Copy to Output Directory to Copy always.
After the publish is completed on the client machine (installing Windows Installer 3.1, SQL Server Express, and the Windows application), the application contains no data, but everything else works fine.
My problem is that I need to attach the myfile.mdf to the new SQL Server Express instance on the client machine during the installation process so that when the application fires it will be pointed to the above location on the client.
Any ideas... scripts... includes for an ApplicationEvents.vb on how to do this? Thanks a ton... :)
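The piece I think I'm missing is the attach itself; in T-SQL I presume it would be something like this (path and database name are placeholders for whatever the installer ends up using):

create database MyFileDb
on (filename = 'C:\Program Files\MyApp\myfile.mdf')
for attach;

Or, with SQL Express user instances, I gather an AttachDbFilename=|DataDirectory|\myfile.mdf clause in the connection string can do the attach at first connect instead.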
Kind regards,
BillB
Your mind is like a parachute.. It has to be able to open, for it to work.
declare @ContactId as integer
set @ContactId = 5

select *
from Person.Contact
where ContactId = @ContactId
   or @ContactId = -1

If you run this in SQL 2005 on the AdventureWorks database, why is the logical reads count 561 ("Table 'Contact'. Scan count 1, logical reads 561") and not 2, as it is when you run it without the second OR condition:

declare @ContactId as integer
set @ContactId = 5

select *
from Person.Contact
where ContactId = @ContactId

How can I use the same SP and either get one record returned by passing the ID, or pass a dummy parameter like -1 in order to get ALL the records returned? In this case, even when I pass a parameter like ContactID = 5, there is still a table scan (a clustered index scan in this case) happening because of the other OR condition. There's no way to tell SQL to check the first condition and, if it is true, skip checking the second OR condition. On the same topic, does this mean all OR conditions are ALWAYS evaluated, regardless of whether one of them has already been determined to be true?

Thank you
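The usual workaround I know of is to split the two cases inside the procedure so each branch gets its own plan; a sketch (the procedure name is made up):

create procedure dbo.usp_GetContact
    @ContactId int
as
begin
    if @ContactId = -1
        select * from Person.Contact;          -- the "all rows" branch
    else
        select * from Person.Contact
        where ContactID = @ContactId;          -- the seek branch
end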
First of all I would like to announce that this is my first time posting here. However, I'm pretty sure that I'm in the best place to ask what I want. To cut the story short, I'm querying a SQL database on a remote machine and having the result saved (mapped) to another table in another database on the same remote machine. The destination table was empty before the query was run the first time. I have been searching for some smart way so that when I modify the source tables my query is based on, nothing is affected except the modified rows. In other words: if the row is already there unchanged, do nothing; if it exists but has changed, update the existing record; otherwise it's a new record and it gets inserted. I think what I need will involve some coding for sure, yes? I don't know if I'm being clear about the requirement, but I know that you are experts and can direct me. Waiting for your valuable replies.
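If I understand my own requirement correctly, it's the classic "upsert" pattern; something like this sketch (all table and column names are placeholders):

-- update rows that exist but have changed
update d
set d.SomeCol = s.SomeCol
from DestDb.dbo.Dest d
join SourceDb.dbo.Source s on s.KeyCol = d.KeyCol
where d.SomeCol <> s.SomeCol;

-- insert rows that do not exist yet
insert into DestDb.dbo.Dest (KeyCol, SomeCol)
select s.KeyCol, s.SomeCol
from SourceDb.dbo.Source s
where not exists (select 1 from DestDb.dbo.Dest d
                  where d.KeyCol = s.KeyCol);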
I would appreciate any thoughts/ideas on the following use case for the distributed service broker application we plan to migrate from our existing proprietary tcp based message protocol using database tables for reliability.
There are two ssb services running in separate sql server instances, each on a different server machine. For simplicity, let us assume the ssb endpoint names are SSBA, SSBB. SSBB is the Initiator of the Dialog while SSBA is the Target. Now the requirement is that if the underlying network communication between the two ssb endpoints(SSBA and SSBB) is broken or if the critical service SSBB is down, then processing of any incoming message into SSBA's queue from a third service broker service (say SSBEXPR) running within a SqlExpress instance should be delayed until SSBB is alive and network communication between SSBA and SSBB is established. In our existing implementation (wherein SSBA, SSBB and SSBEXPR are windows services) we use a combination of TCP socket disconnects and Heartbeat messages between SSBA and SSBB to determine the health of network connection and that of the SSBB service.
Now my understanding of how the underlying network connection for a ssb dialog works is that if there is no activity on a dialog for a certain amount of time then the underlying network connection is closed. Is there a way to specify the amount of time to say infinite value or something and thus change this behavior? My other question is how can one query the underlying network connection (i.e. a row from sys.dm_broker_connections) associated with a particular conversation? If none of this is possible, then any other patterns/ideas/approach is welcome.
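For the second question, the most I have come up with is inspecting the two sides separately, since as far as I can tell there is no documented key that joins a conversation to a row in sys.dm_broker_connections:

-- current broker network connections
select connection_id, state_desc, login_time, last_activity_time
from sys.dm_broker_connections;

-- open conversations and the far service they talk to
select conversation_handle, far_service, state_desc
from sys.conversation_endpoints;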
I've created a simple application that uses a SQLCE 3.5 database. When I debug it SQLCE 3.5 is deployed to the emulator. However, I made a "smart device cab project" for my application and copied the cab file to my windows mobile 6 device and it does not deploy SQLCE 3.5. I don't see a way to specify the prerequisites of the "smart device cab project" like you can in a normal setup project. How can I get SQLCE 3.5 to deploy with my application...or even just get it on my device? I've tried installing it on my desktop with the device connected via active sync, but it doesn't install on the device like the compact framework did.
So in a previous thread I discovered that in order to actually subscribe to any publication, the publisher needs to be a well-known network name, requiring DNS resolution. You can't simply point a SQLExpress instance at an "ip address\instance" name and have it resolve the communications.
Trying to develop a smart client which will have a data-centric approach to storage of local data. The server is SQL Server 2000 and the footprint database is going to be SQL Server 2005 Express. Is merge replication a viable option? Can somebody guide me on this approach?
Is there any other architecture proposed for smart client architecture where the data transfer will be a couple of GBs? Can somebody tell me more about SOA as well?
What is the best way to compare two entries in a single table where the two fields are "almost" the same? For example, I would like to write a query that would compare the first two words in a "company" field. If they are the same, I would like to output them. For example, "20th Century" and "20th Century Fox" in the company field would be the same. How do I do this? Do I need to use a cursor? Is it as simple as using "LIKE"?
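No cursor should be needed; here is one hedged sketch using plain string functions, where Companies and CompanyID are made-up names and one-word company names would need extra care:

-- extract "the first two words plus trailing space" by finding the
-- second space in the value (two spaces are appended as sentinels)
select a.company, b.company
from (select CompanyID, company,
             left(company + '  ',
                  charindex(' ', company + '  ',
                            charindex(' ', company + '  ') + 1)) as prefix2
      from Companies) a
join (select CompanyID, company,
             left(company + '  ',
                  charindex(' ', company + '  ',
                            charindex(' ', company + '  ') + 1)) as prefix2
      from Companies) b
  on a.prefix2 = b.prefix2
 and a.CompanyID < b.CompanyID;   -- avoids self-pairs and duplicate pairs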
I have created a smart device application for a Windows CE 5.0 device using Visual Studio 2005, and I have added the smart device cab project to the solution. When I add the project output to the application folder, the detected dependencies are not included. Instead, the dependent files appear with a red circle in Solution Explorer.
My project uses .NET Framework 2.0, SQL Server Mobile 2005, a SQL Server Mobile database file, and some symbol files.
I need all of them included in my cab project, so that when the user clicks on it (on the device) it installs all of them.
Let me know why the detected dependencies are not included in the cab project and how I can get all of the above-mentioned things installed along with my program.
Cool place! Has anyone deployed SQL Express silently using one-click and attached a smart client DB from within the app? Would love to see some of the best practices or horror stories! Just kidding... :) I'm about to deploy a smart client using SQL Express and could use some tips from someone who has been there.
I have both vs2005 and vs2008 installed. I'm working with a .Net Compact Framework 3.5 Smart Device Project.
If I reference the System.Data.SqlClient.dll (Version 3.0.3600.0, Runtime v2.0.50727) at C:\Program Files\Microsoft SQL Server Compact Edition\v3.5\Devices\Client\System.Data.SqlClient.dll
When I deploy the application I get an error ".Net Compact Framework v2.0 could not be found Please install it and run the setup again"
Studio is: Deploying 'C:\Program Files\Microsoft Visual Studio 8\SmartDevices\SDK\SQL Server\Client\v2.0\wce500\ARMV4i\sql.ppc.wce5.armv4i.CAB'
We tested this on a PC without vs2005 and it seems to work fine.
OK. I give up and need help. Hopefully it's something minor ...
I have a dataflow which returns email addresses to a recordset.
I pass this recordset into a ForEachLoop configuring the enumerator as (Foreach ADO Enumerator). I also map the email address as a variable with index 0.
I then have a Execute SQL task which receives this email address as a varchar variable (parameter 0) which I then use in my SQL command to limit the rows returned. I have commented out the where clause and returned all rows regardless of email address to try to troubleshoot this problem. In either event, I then use a resultset to store the query result of type object and result name 0.
I then pass this resultset into a script variable to start parsing the sql rows returned as type object. ( I assume this is the correct way to do this from other prior posts ...).
The script appears to throw an exception at the following line. I assume it's because I'm either not passing in the values properly or the query doesn't return anything. However, I am certain the query works as it executes just fine at the command prompt.
My intent is to email the query results to each email address with the following type of data by passing the parsed data from the script to a send mail task. Email works fine and sends out messages but the content is empty. I pass the parsed data as string values to the messagesource and define the messagesourcetype as a variable in the mail task.
part number   leadtime
x             5
y             9
...
Does anyone have any idea what I might be doing wrong?
I have a SQL Server in an internal network and need to "replicate" to an identical server and database in a DMZ. The DMZ can only receive files sent by a custom component and no port(s) can be opened in the firewall or standard FTP used.
I also need to minimise traffic (i.e. sending whole tables is not an option, as some contain millions of rows), replicate approximately once hourly, and allow record locking only (as opposed to DB locking/exclusive access or table locking).
Any thoughts / experiences greatly appreciated otherwise I'm looking at putting triggers on all the tables to monitor changes and generate SQL statements for execution against the DMZ server.
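For what it's worth, the trigger-based fallback I have in mind would look roughly like this: each table's trigger writes changed keys to a log table, and an hourly job serializes those rows into files for the custom transport. dbo.Orders and OrderID are stand-ins for the real tables:

create table dbo.ChangeLog (
    LogID int identity(1,1) primary key,
    TableName sysname not null,
    KeyValue varchar(50) not null,
    Operation char(1) not null,   -- I/U/D
    LoggedAt datetime not null default getdate()
);
go
create trigger trg_Orders_Log on dbo.Orders
after insert, update, delete
as
begin
    set nocount on;
    insert into dbo.ChangeLog (TableName, KeyValue, Operation)
    -- inserted rows cover both inserts and updates
    select 'Orders', cast(i.OrderID as varchar(50)),
           case when exists (select * from deleted) then 'U' else 'I' end
    from inserted i
    union all
    -- pure deletes have rows in deleted only
    select 'Orders', cast(d.OrderID as varchar(50)), 'D'
    from deleted d
    where not exists (select * from inserted);
end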