How can I create a job in SQL Agent to create a new snapshot every hour?
I have, for example, a T-SQL statement that does it manually.
create database Snapshotter_snap_20070418_1821 on
( name = Snapshotter, filename =
'c:\temp\Snapshotter_snap_20070418_1821.ss')
as snapshot of Snapshotter
Now, what I do NOT want is to have only one copy, but rather to do this
every hour or two throughout the day and keep the old copies for some
time. (If I only wanted one copy, a DROP DATABASE and a CREATE DATABASE
<generic name> would be easy.)
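A minimal sketch of a job-step script that builds the timestamped name dynamically, assuming the source database and its logical data file are both named Snapshotter and that c:\temp exists. Put this in a T-SQL step of a SQL Agent job scheduled hourly:

DECLARE @stamp varchar(13), @sql nvarchar(max);

-- e.g. 20070418_1800
SELECT @stamp = CONVERT(varchar(8), GETDATE(), 112) + '_'
              + REPLACE(CONVERT(varchar(5), GETDATE(), 108), ':', '');

SET @sql = N'create database Snapshotter_snap_' + @stamp + N' on
( name = Snapshotter, filename =
''c:\temp\Snapshotter_snap_' + @stamp + N'.ss'')
as snapshot of Snapshotter';

EXEC (@sql);

A second step can then drop snapshots older than your retention window by looping over SELECT name FROM sys.databases WHERE source_database_id = DB_ID('Snapshotter') and comparing the timestamp embedded in each name.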
I seem to have a strange problem when applying a snapshot when the tables in the publication have been updated while the snapshot was being generated.
Say for example there is a table called RMAReplacedItem in the publication. When the snapshot starts being applied to the subscriber, a stored procedure called sp_MSins_RMAReplacedItem_msrepl_css gets created that handles an insert where the row already exists (i.e. it updates the row rather than inserting it). However, after all the data has been loaded into the tables, instead of calling this procedure, it tries to call one called sp_MSins_RMAReplacedIte_msrepl_cssm: the last letter of the table name has been moved to the end of the procedure name.
The worst part is that this causes the application of the snapshot to fail, but it doesn't report what the error is; it just tries applying the snapshot again. The only way I have managed to find which call is failing is to run Profiler against the subscriber while the snapshot is being applied and watch for the errors.
I have run sp_browsereplcmds, and the commands in there are what is applied to the subscriber, i.e. they contain the wrong procedure name.
All the servers involved are running SQL 2005 Service Pack 2. The publisher and subscriber were both upgraded from SQL 2000, but the distribution server is a fresh install of SQL 2005.
Apologies for the simplicity of the question, but it reflects my capabilities! I have the following sample fields coming from different tables: Location, TimeDate (timestamp), Data. I need to return the average of Data per Location per HOUR. Thanks.
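A minimal sketch, assuming the joined fields end up in a view or derived table called SampleData (hypothetical name) with columns Location, TimeDate (datetime) and Data (numeric):

SELECT Location,
       CONVERT(varchar(10), TimeDate, 120) AS [Day],
       DATEPART(HOUR, TimeDate) AS [Hour],
       AVG(Data) AS AvgData
FROM SampleData
GROUP BY Location, CONVERT(varchar(10), TimeDate, 120), DATEPART(HOUR, TimeDate)
ORDER BY Location, [Day], [Hour];

If Data is an integer column, use AVG(CAST(Data AS float)) to avoid integer averaging.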
I am trying to get hourly data from a table; the query worked fine in MySQL but not in SQL Server. How do I use this in SQL Server Management Studio? I receive "'time' is not a recognized function name", and curdate() is not a recognized function either.
select [pa_number], [pa_surname], [pa_forename],
  sum(time(datetime) >= '07:00:00' and time(datetime) < '08:00:00') as '7.00-8.00 AM',
  sum(time(datetime) >= '08:00:00' and time(datetime) < '09:00:00') as '8.00-9.00 AM',
  sum(time(datetime) >= '09:00:00' and time(datetime) < '10:00:00') as '9.00-10.00 AM',
  sum(time(datetime) >= '10:00:00' and time(datetime) < '11:00:00') as '10.00-11.00 AM',
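Neither TIME() nor CURDATE() exists in T-SQL. A sketch of the SQL Server translation (the table name attendance is a placeholder, since the post does not show the FROM clause): CONVERT(varchar(8), ..., 108) extracts the hh:mm:ss part, a CASE expression replaces MySQL's boolean SUM, and comparing against CONVERT(varchar(10), GETDATE(), 120) stands in for curdate():

SELECT [pa_number], [pa_surname], [pa_forename],
  SUM(CASE WHEN CONVERT(varchar(8), [datetime], 108) >= '07:00:00'
            AND CONVERT(varchar(8), [datetime], 108) < '08:00:00'
           THEN 1 ELSE 0 END) AS [7.00-8.00 AM],
  SUM(CASE WHEN CONVERT(varchar(8), [datetime], 108) >= '08:00:00'
            AND CONVERT(varchar(8), [datetime], 108) < '09:00:00'
           THEN 1 ELSE 0 END) AS [8.00-9.00 AM]
  -- ...repeat the pattern for the remaining hour bands...
FROM attendance   -- placeholder table name
WHERE CONVERT(varchar(10), [datetime], 120) = CONVERT(varchar(10), GETDATE(), 120)  -- curdate() equivalent
GROUP BY [pa_number], [pa_surname], [pa_forename];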
First time here so please bear with me. I set up a DTS package to export data to an Excel sheet on an hourly basis. The problem is, it keeps appending to the same Excel sheet. Any idea how to prevent that? All I want to accomplish is that every hour the latest data is in the Excel sheet and the previous data is deleted. Thanks in advance!
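One common workaround, sketched here with placeholder names and not tested against your package: add an Execute SQL task that runs against the Excel connection (not the SQL Server one) before the data pump, dropping and recreating the worksheet table so every run starts from an empty sheet:

-- Jet SQL, executed against the Excel connection; [Sheet1$] and the
-- columns are placeholders for your sheet's actual layout.
DROP TABLE [Sheet1$];
CREATE TABLE [Sheet1$] ([Id] int, [Value] float, [LoggedAt] datetime);

DROP TABLE clears the data (the worksheet itself remains), and CREATE TABLE writes the header row back so the transform task appends into a fresh table.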
For a GPS utility project we are planning on extracting certain attributes from a huge read-only "GPS Raw Data" database, which we have access to, containing several years of GPS data from several devices attached to vehicles. The data is time-stamped. Where the time gap between pieces of data is more than 10 minutes, a new trip instance is assumed, and in our write-access "Trip" database we create a new instance for the data clump, with a new trip id, along with the time range of the data. The process is to be run hourly to update the "Trip" database with new trips and append to overlapping trips. We have some questions:
a) Is it easy to read from one database and write into another in C# hourly?
b) How would one go about running a C# program automatically every hour on the server?
c) Is there a better way to do this than an hourly update (dynamically perhaps)?
d) When querying the database and comparing the time stamps, how for instance would we go about identifying a 10-minute gap when the time/date is in the format "22/12/2007 11:25:00"? I can't get my head around actually writing this; it's probably ridiculously simple. (See the sketch below for one approach to d.)
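For question (d): the display format is irrelevant once the column is stored as a datetime; DATEDIFF compares the underlying values directly. A sketch on SQL Server 2005+, with hypothetical table and column names RawGps(DeviceId, ReadingTime), that numbers readings per device and flags rows arriving more than 10 minutes after their predecessor as trip starts:

WITH Ordered AS (
    SELECT DeviceId, ReadingTime,
           ROW_NUMBER() OVER (PARTITION BY DeviceId ORDER BY ReadingTime) AS rn
    FROM RawGps
)
SELECT cur.DeviceId, cur.ReadingTime AS TripStart
FROM Ordered AS cur
LEFT JOIN Ordered AS prev
       ON prev.DeviceId = cur.DeviceId AND prev.rn = cur.rn - 1
WHERE prev.DeviceId IS NULL   -- very first reading for the device
   OR DATEDIFF(MINUTE, prev.ReadingTime, cur.ReadingTime) > 10;

For (b), the simplest options are a SQL Agent job that calls the executable, or a Windows Scheduled Task running hourly.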
I have two servers, using SQL Server 2000. I was asked to implement an hourly backup of 3 databases on one server and restore those databases to another server. Could anyone give me the detailed steps to do that? Thanks a lot in advance!
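A minimal sketch for one database, assuming a share \\ServerB\Backups that both servers can reach; the database name, logical file names and paths are placeholders (check the logical names with sp_helpfile). Repeat per database and schedule each step as an hourly SQL Agent job:

-- Job step on the source server:
BACKUP DATABASE MyDb TO DISK = '\\ServerB\Backups\MyDb.bak' WITH INIT;

-- Job step on the destination server, scheduled a few minutes later:
RESTORE DATABASE MyDb FROM DISK = '\\ServerB\Backups\MyDb.bak'
WITH REPLACE,
     MOVE 'MyDb_Data' TO 'D:\Data\MyDb.mdf',
     MOVE 'MyDb_Log'  TO 'D:\Data\MyDb.ldf';

WITH REPLACE overwrites the destination copy each hour; note the destination database cannot be in use while the restore runs.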
Hello everyone,
I have around 20 reports in an ASP web application which connects to a SQL Server 2000 dB, executes stored procedures based on input parameters and returns the data in a nice tabular format.
The data which is used in these reports actually originates from a 3rd party accounting application called Exchequer. I have written a VB application (I call it the extractor) which extracts data from Exchequer and dumps the same into the SQL Server dB every hour. The running time for the extractor is an average of 10 minutes. During these 10 minutes, while the extractor seems to run happily, my ASP web application which queries the same dB that the extractor application is updating becomes dead slow.
Is there any way I can get the extractor to be nice to SQL Server and not take up all its resources, so that the ASP web application users do not have to contend with a very very slow application during those times?
I am using a DSN to connect to the dB from the server that runs the web application as well as the other server which runs the extractor. Connection pooling has been enabled on both (using the ODBC Administrator). The Detach Database dialog gives me a list of open connections to the dB. I have been monitoring the same and I have noted 10-15 open connections at most times, even during the execution of the extractor.
All connection objects in the ASP as well as VB applications are closed and then set to nothing.
This system has been in use since 2002. My data file has grown to 450MB and my transaction log is close to 2GB. Can the transaction log be a problem? For some reason, the size of the transaction log does not go down even after a complete dB backup is done. Once a complete dB backup is done, doesn't the transaction log lose its significance, so that it can actually be deleted? Anyway, this is another post I'm doing today to the group.
In the extractor program:
1) I create a temporary table
2) I create an empty recordset out of the table
3) I loop through the Exchequer records using Exchequer's APIs, adding records into the recordset of the temporary table as I go along
4) I do an UpdateBatch of the Recordset intermittently
5) I open an SQL transaction
6) I delete all records from the main table
7) I run INSERT INTO main_table SELECT * FROM #temp_table
8) I commit the transaction
I hope that the information is sufficient.
Thanks
Sam
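On the transaction-log question specifically: a full database backup does not truncate the log. In SQL Server 2000, log space is only released for reuse by a log backup (or by switching to the Simple recovery model). A hedged sketch, assuming the database is named ExchequerDB and its log file's logical name is ExchequerDB_Log (verify with sp_helpfile):

-- Run regularly so log space can be reused:
BACKUP LOG ExchequerDB TO DISK = 'D:\Backups\ExchequerDB_log.trn'

-- One-off, after a log backup: shrink the 2GB file back down (target size in MB):
DBCC SHRINKFILE ('ExchequerDB_Log', 100)

Deleting everything from the main table and re-inserting inside one transaction (steps 5-8 above) is also exactly the pattern that both bloats the log and blocks the ASP readers for the duration; loading into a staging table and swapping it in, or committing in smaller batches, would be gentler.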
I am using the following query in a view to retrieve the latest 24 hourly records for a site; it returns 24 hourly records for the last day of measurements at the site. This works great. However, I now need to retrieve only the hourly records up to the current hour. For example, hours run from 00:00 to 23:00, so if the query is executed at 15:00 I should return only the hourly records for 00:00 to 15:00. I believe I need to filter the result set, or modify the query, to exclude records greater than the current hour.
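A sketch of the extra filter, with hypothetical names (HourlyReadings for the view, ReadingTime for the timestamp column): keep only today's rows whose hour does not exceed the current hour.

SELECT *
FROM HourlyReadings
WHERE CONVERT(varchar(10), ReadingTime, 120) = CONVERT(varchar(10), GETDATE(), 120)  -- today only
  AND DATEPART(HOUR, ReadingTime) <= DATEPART(HOUR, GETDATE());  -- up to the current hour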
I am using the script below and I am getting data at 15-minute intervals. I would like to aggregate this data to hourly, so instead of separate readings for 2014-01-01 00:15:00.000 and 2014-01-01 00:30:00.000 I want all the data aggregated under 2014-01-01 00:00:00.000, and then for 2 o'clock, and so on. How should I tweak this query to sum the interval values and display them?
SELECT r.MeterId, r.ReadingDate, r.Reading
FROM MeterReading r
     JOIN MeterDetail d ON r.MeterId = d.MeterId
     JOIN Building b ON d.BuildingId = b.BuildingId
WHERE b.BuildingName LIKE '%182%'
  AND r.ReadingDate BETWEEN '2014-01-01' AND '2014-01-10'
ORDER BY r.MeterId
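A sketch of the hourly roll-up over the same query: DATEADD(HOUR, DATEDIFF(HOUR, 0, ...), 0) truncates each ReadingDate to the top of its hour, and SUM collapses the four 15-minute readings into one row per meter per hour (use AVG instead if the readings are cumulative snapshots rather than interval consumption):

SELECT r.MeterId,
       DATEADD(HOUR, DATEDIFF(HOUR, 0, r.ReadingDate), 0) AS ReadingHour,
       SUM(r.Reading) AS Reading
FROM MeterReading r
     JOIN MeterDetail d ON r.MeterId = d.MeterId
     JOIN Building b ON d.BuildingId = b.BuildingId
WHERE b.BuildingName LIKE '%182%'
  AND r.ReadingDate BETWEEN '2014-01-01' AND '2014-01-10'
GROUP BY r.MeterId, DATEADD(HOUR, DATEDIFF(HOUR, 0, r.ReadingDate), 0)
ORDER BY r.MeterId, ReadingHour;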
I am trying to break down the content on an hourly basis. It works fine when there are values for a specific hour, but if there are no entries or values for a specific hour it returns null instead of 0. How can I get zero instead of null?
Here is the code below:
With temp_exp As (
    Select c.state,   -- the original read pl.state, but the only table in scope is aliased c
           Cast(signeddate As date) As signatureDate,
           signeddate As DoneTime
    From contract c with(nolock)
    where c.signeddate > DateAdd(Day, Datediff(Day, 0, GetDate()), 0)
)
Select Sum(Case When CONVERT(varchar(8), DoneTime, 108) Between '07:00:00' And '07:59:59' Then 1 Else 0 End) '8AM',
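Two likely causes, sketched against the post's own names. If the whole day has no rows, SUM over zero rows returns NULL, which ISNULL(SUM(...), 0) fixes. If the counts are instead grouped per hour, hours with no rows produce no row at all; in that case LEFT JOIN the data to a generated list of hours, since COUNT of a column ignores NULLs and therefore yields 0 for empty hours:

WITH temp_exp AS (
    SELECT c.state, signeddate AS DoneTime
    FROM contract c WITH (NOLOCK)
    WHERE c.signeddate > DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0)
),
hours AS (   -- the hours you want reported, even when empty
    SELECT 7 AS hr UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL SELECT 10
)
SELECT h.hr, COUNT(t.DoneTime) AS signed_count
FROM hours h
LEFT JOIN temp_exp t ON DATEPART(HOUR, t.DoneTime) = h.hr
GROUP BY h.hr;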
I have SharePoint 2010, which I have uploaded a PowerPivot model onto.
Currently it doesn't seem like I can set up the Data Refresh service to refresh my model more frequently than once a day; the Data Refresh configuration page (screenshot not included here) doesn't show an option for anything more frequent than daily.
I have also tried to refresh the model's database directly on the Tabular SSAS instance (which SharePoint is using to store PowerPivot models) via SSIS or XMLA, but I get an error saying the tabular model is in "ReadOnly" mode, which I could potentially bypass (by detaching and re-attaching the model), but that's starting to sound a bit too hacky.
Is there any way I can refresh my SharePoint-uploaded PowerPivot model more than once daily?
This is the code I have written; I am trying to retrieve the minimum count of PQPageID for every hour for a given date range.
WITH CTE AS (
    SELECT PQIM.PQPageID,
           PQIM.PageURL AS PageDescription,
           CONVERT(Date, NCPI.RequestDateTime) AS [Date],
           DATEPART(HOUR, NCPI.RequestDateTime) AS [HOUR],
           ISNULL(COUNT(NCPI.PQPageID), 0) AS HourlyPQPageIdCount
    FROM dbo.NewCarPurchaseInquiries AS NCPI WITH (NOLOCK)
For example: PQPageID 1 at 8am on 4-11-2015 has a count of 2359, and on 9-11-2015 at 8am it has a count of 54; then it should return date 9-11-2015 and count 54 for hour 8. Likewise, for every PageID and every hour from 8 to 23 it should return the minimum count from the given date range.
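A sketch completing the idea; the cut-off FROM clause presumably joins a page master table for PageURL, which is omitted here, and the date range is an example. Plain MIN with GROUP BY cannot also return the date the minimum occurred on, so ROW_NUMBER is used to keep the whole row of the smallest hourly count per page and hour:

WITH HourlyCounts AS (
    SELECT NCPI.PQPageID,
           CONVERT(date, NCPI.RequestDateTime) AS [Date],
           DATEPART(HOUR, NCPI.RequestDateTime) AS [Hour],
           COUNT(*) AS HourlyCount
    FROM dbo.NewCarPurchaseInquiries AS NCPI WITH (NOLOCK)
    WHERE NCPI.RequestDateTime >= '20151104'
      AND NCPI.RequestDateTime < '20151110'   -- example date range
    GROUP BY NCPI.PQPageID, CONVERT(date, NCPI.RequestDateTime),
             DATEPART(HOUR, NCPI.RequestDateTime)
),
Ranked AS (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY PQPageID, [Hour]
                                 ORDER BY HourlyCount ASC) AS rn
    FROM HourlyCounts
    WHERE [Hour] BETWEEN 8 AND 23
)
SELECT PQPageID, [Date], [Hour], HourlyCount AS MinHourlyCount
FROM Ranked
WHERE rn = 1
ORDER BY PQPageID, [Hour];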
We have some reports that run quite slowly because the queries are complicated and the tables are large. So we created a materialized view in Oracle which stores the result of the query, and we refresh it occasionally. How can I do this in MSSQL? I tried an indexed view, but it seems to have lots of restrictions; our query has subqueries and cross-database table joins, so it cannot be an indexed view. Is there another object, or a temp table, that can be used to cache the report data and be accessed globally by other procedures?
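The usual substitute is a plain summary table refreshed on a schedule; unlike an indexed view it has no restrictions on subqueries or cross-database joins. A sketch with hypothetical names (the SELECT stands in for the complex report query); schedule the procedure from a SQL Agent job:

CREATE TABLE dbo.ReportCache (   -- shape mirrors the report query's output
    CustomerId int, Period date, Total money);
GO
CREATE PROCEDURE dbo.RefreshReportCache
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRAN;
        TRUNCATE TABLE dbo.ReportCache;
        INSERT INTO dbo.ReportCache (CustomerId, Period, Total)
        SELECT CustomerId, Period, SUM(Amount)
        FROM OtherDb.dbo.Sales            -- placeholder for the real cross-database query
        GROUP BY CustomerId, Period;
    COMMIT;
END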
Dear Readers, is it possible, as in Access, to link to tables in other SQL databases that are on the same server? I have a query that I originally had in Access that queried from multiple databases. It did this by having the tables from the other databases linked into the database that held the query.
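No linking step is needed in SQL Server for databases on the same instance: a query can reference another database's tables directly with three-part names (database.schema.object), provided the login has rights in both. A small example with hypothetical names:

SELECT o.OrderId, c.CustomerName
FROM dbo.Orders AS o
JOIN OtherDb.dbo.Customers AS c   -- database.schema.table
     ON c.CustomerId = o.CustomerId;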
I just restored my SQL Server 2000 database on SQL Server 2005. After this I ran the Service Broker sample ("Hello World") on this database, changing the AdventureWorks name to the new database name. The "setup.sql" runs fine. When I run "SendMessage.sql" I do not get any rows in the output (the message is not getting inserted into the queue). I checked that Service Broker is enabled on this database using the query select is_broker_enabled from sys.databases where name = 'newdbname'; it returned 1. I even tried ALTER DATABASE ... SET ENABLE_BROKER, but it didn't work.
When I tried the sample on a newly created database it worked fine.
Is there any solution to make the restored database work for Service Broker?
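A likely cause, offered as a hedged suggestion: a restored database keeps its original service_broker_guid, and message delivery stays suspended until the database is given a working broker identity; ENABLE_BROKER alone does not do that after a restore. Assigning a fresh identity usually fixes it (newdbname is the name from the query above):

-- WARNING: this discards any messages already sitting in the database's queues.
ALTER DATABASE newdbname SET NEW_BROKER WITH ROLLBACK IMMEDIATE;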
I am a new fellow to replication. The problem I am getting is that I am not able to create a snapshot. Whenever I start the Snapshot Agent it runs for half an hour and then gets stuck on one view, saying 'failed to process bulk copy of data from dbo.syncobj_03x666*'. If I publish tables only, I still get the same error. This view was created by the system. In my case the publisher is also acting as the distributor. I created a SQL username which has access on the subscriber and the publisher. I would be happy if someone could suggest the primary criteria we have to keep in mind while setting up replication.
Create jobs to back up the database on the source server and restore it on the destination servers.
------------ Robert at 5/7/2002 11:00:30 AM
Yes, and I would rather not use DTS to accomplish this task.
------------ Ray Miao at 5/7/2002 10:02:15 AM
Do you have a direct network connection to the remote server? Did you try DTS?
------------ Robert at 5/7/2002 9:08:06 AM
I've been trying to replicate a database to an off-site server using snapshot replication. It is scheduled to run every hour, but I've noticed that when data is changed at the source it never gets replicated to the destination. Does anyone know why? I can't use transactional replication because not all the tables have primary keys, and they can't be added due to code. Some tables have ID columns and have been created with the Not For Replication option on the subscriber. Any help will be appreciated.
I can set up snapshot replication for those tables without foreign key constraints. But if there are foreign keys on a table, there will be an error message indicating that the object cannot be dropped because it is referenced by ....
I have a problem replicating data through SNAPSHOT replication, as I have tables with foreign keys at the subscriber end. TRUNCATE TABLE is not working because these tables are referenced by a FK. Recreating the tables during the snapshot hits the same problem. Any suggestions?
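One hedged option, with placeholder publication and article names: change each article's pre-creation command from TRUNCATE to DELETE, since DELETE is allowed against tables referenced by foreign keys while TRUNCATE TABLE is not:

EXEC sp_changearticle
     @publication = N'MyPublication',
     @article = N'MyTable',
     @property = N'pre_creation_cmd',
     @value = N'delete';   -- instead of 'truncate' or 'drop'

Deletes can still fail if the FKs point between replicated tables and the articles are processed in the wrong order; in that case the subscriber-side constraints may need to be created NOT FOR REPLICATION or dropped at the subscriber.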
1) In snapshot replication, can the subscriber send info back to the publisher (even in a manual process)?
2) In snapshot replication, do we need a distributor set up between the publisher and subscriber if there will only be a single subscriber, or can we write directly to it?
I am replicating all tables in the DB. The tables have PKs, FKs and identity columns. While truncating the data from the tables on the subscriber I am getting constraint errors. Please suggest how to get rid of this error.
We have a production server on the East Coast (SQL Server 2000 SP2; the database size is around 30 GB). We have a reporting server on the West Coast. We need to replicate (transactional replication, every hour) from the East Coast to the West Coast. Is there any way I can take a backup, restore it up to the last transaction log backup, and then start the replication agents on the production server (by saying the schema and data already exist)? Basically we don't want to push a snapshot using FTP or bcp across the WAN, because it would be very slow.
If this is possible, will there be any validation problems?
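This is what the 'replication support only' synchronization type is for; a hedged sketch, with placeholder names, run at the publisher after the West Coast database has been restored up to the last log backup:

EXEC sp_addsubscription
     @publication = N'EastCoastPub',
     @subscriber = N'WESTCOASTSRV',
     @destination_db = N'ReportingDB',
     @sync_type = N'replication support only';   -- no snapshot is generated or applied

The main validation risk is any change committed on the publisher between the last restored log backup and the moment the agents start; keep that window as small as possible, and you can run the validation procedures (e.g. sp_publication_validation) afterwards to check row counts.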