SQL 2012 :: Does Not Load Smaller Splash Screen
May 29, 2014
However, when I start SQL 2012 it loads Management Studio but does not load the smaller splash screen (the "Connect to Server" dialog) that usually appears asking me to connect to a server. When I try to click any of the menu items at the top of the screen, the system just hangs.
I also have 2012 Service Pack 1 installed. My installations of 2005 and 2008 R2 still work fine. I also tried loading SQL 2014 and had the same issues as with 2012.
View 0 Replies
Dec 27, 2006
Everything is OK: the connection is on, the page loads, no error.
But the data from the database cannot be retrieved; it just doesn't show up on screen.
I don't know what's wrong. I have followed all the steps shown in the book. Can anyone please help me? Thanks.
View 3 Replies
View Related
Jul 20, 2005
Hi all: I restored one backup database (7.9 GB mdf) on two different servers. I shrank them by clicking "Move pages to beginning of file before shrinking". After shrinking, one mdf file was 6.7 GB and the other 4.2 GB. I shrank again and again:
1. The 6.7 GB file became 5.9 GB, 5.2 GB, 4.7 GB and 4.2 GB (four times).
2. The 4.2 GB file became 4.0 GB (just one more time).
It is weird. I am wondering: will the mdf files keep getting smaller if I continue to shrink them? What is the reason? Thanks, WJ
View 1 Replies
View Related
Oct 25, 2007
I get the following error when I click on Preview.
Build complete -- 0 errors, 0 warnings
[rsWarningFetchingExternalImages] Images with external URL references will not display if the report is published to a report server without an UnattendedExecutionAccount or the target image(s) are not enabled for anonymous access.
[rsInvalidMIMEType] The value of the MIMEType property for the image 'image1' is "text/html; charset=utf-8", which is not a valid MIMEType.
Preview complete -- 0 errors, 2 warnings
This is the URL:
http://air101/airmaps/amsexpress.aspx?sym=bigdot&mlat=42.2446&mlon=-71.1649&lat=42.2446&lon=-71.1649&wid=0.012500&ht=0.012500&mpanv=0&mpanh=0
I've been told that the image is a GIF; how should I set the MIMEType? Or is this a security issue of some sort?
View 5 Replies
View Related
Jun 6, 2014
I need a query for an incremental load using CDC (Change Data Capture).
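A minimal sketch of such a query, assuming CDC is already enabled on the source table and that dbo_MyTable is the capture instance name (both the table and instance names here are stand-ins):
DECLARE @from_lsn BINARY(10), @to_lsn BINARY(10);
-- low endpoint: in a real ETL this would be the last LSN processed by the previous run
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_MyTable');
-- high endpoint: the newest change available in the CDC change tables
SET @to_lsn = sys.fn_cdc_get_max_lsn();
-- net changes: one row per key with its final state (insert/update/delete)
SELECT * FROM cdc.fn_cdc_get_net_changes_dbo_MyTable(@from_lsn, @to_lsn, 'all');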
View 1 Replies
View Related
Mar 11, 2015
I am after T-SQL code which will simply load the next T-log backup file from a network share folder to a warm standby db on a secondary server. What is needed is a third server (server X) to participate in log shipping (multiple targets).
Primary SERVER (SERVER A)
Secondary SERVER (SERVER B) Log shipped to via GUI.
THIRD SERVER (SERVER X) which will contain the same log shipped db from server A.
This will simply restore the logs from a network share to keep the db up to date.
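Something like the following could be scheduled on server X; the share path, database name and undo file location are placeholders, and the job would loop over whatever log backups have not yet been applied:
-- apply the next log backup in sequence, keeping the database readable between restores
RESTORE LOG WarmStandbyDB
FROM DISK = N'\\FileShare\LogBackups\WarmStandbyDB_20150311.trn'
WITH STANDBY = N'D:\SQLData\WarmStandbyDB_undo.dat';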
View 3 Replies
View Related
Jun 13, 2014
Is there such a thing as 'load balancing' on a failover cluster?
View 2 Replies
View Related
Mar 24, 2015
We have a file import job. This job typically imports millions of records into a SQL 2008 DB. After the load, the DB performance goes down the drain. Thus far, their solution has been to rebuild indexes on the affected tables. I'd like to come up with a better solution. My guess is that after the load, the statistics are shot until the next stats update.
What is the best way to handle this scenario? There must be some way to keep the stats current during a big data load.
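One option is to refresh statistics explicitly right after the load instead of rebuilding the indexes; a minimal sketch (the table name is a placeholder):
-- refresh statistics on the loaded table with a full scan
UPDATE STATISTICS dbo.ImportedData WITH FULLSCAN;
-- or, more broadly, refresh whatever the engine considers stale in the database
EXEC sp_updatestats;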
View 3 Replies
View Related
Sep 22, 2015
I'm trying to improve the loading of some tables with large amounts of data that form part of an ETL. I was going to try removing any indexes before inserting to speed up the process, but I had some questions on whether or not I should include the clustered index (assuming one exists).
I was originally planning on including a step to disable all indexes on the destination table using the following:
ALTER INDEX ALL ON MyTable DISABLE
Once the load had finished I'd simply rebuild all the indexes.
Or should I simply disable only the non-clustered indexes?
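For what it's worth, disabling the clustered index makes the whole table inaccessible until it is rebuilt, so the usual pattern is to disable only the nonclustered ones. A sketch, with hypothetical index names:
-- disable each nonclustered index individually; the clustered index stays up
ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable DISABLE;
ALTER INDEX IX_MyTable_Col2 ON dbo.MyTable DISABLE;
-- ... run the load here ...
-- REBUILD on ALL re-enables the disabled nonclustered indexes in one pass
ALTER INDEX ALL ON dbo.MyTable REBUILD;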
View 9 Replies
View Related
Nov 4, 2015
I have a VB.NET job scheduled via Task Scheduler (Windows 2012) that calls a stored procedure that bulk inserts. I have added the user to the server role "bulkadmin", yet I get the "You do not have permission to use the bulk load statement" error.
System.Data.SqlClient.SqlException (0x80131904): You do not have permission to use the bulk load statement.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream,
[code]....
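Besides bulkadmin, the executing principal also needs rights in the target database; if the procedure runs under a different security context, the server role membership may not carry over. The grants usually involved look like this (login and object names are placeholders):
-- server level (run in master): permission to issue bulk load statements
GRANT ADMINISTER BULK OPERATIONS TO [DOMAIN\JobAccount];
-- database level: the same principal still needs INSERT on the target table
USE TargetDB;
GRANT INSERT ON dbo.TargetTable TO [DOMAIN\JobAccount];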
View 0 Replies
View Related
Dec 22, 2013
We need to implement an incremental load in the database. A sample scenario: there is a view (INCOMEVW) which is built on top of a query like
CREATE VIEW INCOMEVW
AS
SELECT CLIENTID, COUNTRYNAME, SUM(OUTPUT.INCOME) AS INCOME
FROM (SELECT EOCLIENT_ID AS CLIENTID, EOCOUNTRYNAME AS COUNTRYNAME, EOINCOME AS INCOME
      FROM EOCLIENT C INNER JOIN EOCOUNTRY CT ON C.COUNTRYCODE = CT.COUNTRYCODE
[code]...
This is a sample view. As of now there is a full load happening from the source (SELECT * FROM INCOMEVW) into the target table tbl_Income. We need to pick up only the delta and load it into the target table using a staging area. The challenges are:
1) If we get the delta (inserted, updated or deleted rows) in the source tables EOCLIENT, EOCOUNTRY, ENCLIENT, ENCOUNTRY, how do we load the increment into the single target table tbl_Income?
2) How do we do the SUM operation with GROUP BY in an incremental load?
3) We are planning a daily incremental load and are thinking of creating the same table structure as the source, with Date and Flag columns to identify the date and whether the source row is an insert, update or delete. But we are not sure how to frame something like this view and load into a single target with SUM operations. One possible approach is sketched below.
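One common pattern for point 2 is to stage only the keys touched by the delta, recompute the aggregate for just those keys from the view, and merge the result into the target. A rough sketch, where stg_ChangedClients is a hypothetical staging table of affected client IDs:
MERGE tbl_Income AS tgt
USING (SELECT CLIENTID, COUNTRYNAME, INCOME
       FROM INCOMEVW
       WHERE CLIENTID IN (SELECT CLIENTID FROM stg_ChangedClients)) AS src
   ON tgt.CLIENTID = src.CLIENTID AND tgt.COUNTRYNAME = src.COUNTRYNAME
WHEN MATCHED AND tgt.INCOME <> src.INCOME
    THEN UPDATE SET tgt.INCOME = src.INCOME
WHEN NOT MATCHED BY TARGET
    THEN INSERT (CLIENTID, COUNTRYNAME, INCOME)
         VALUES (src.CLIENTID, src.COUNTRYNAME, src.INCOME);
-- deletes need a separate pass: remove target rows whose client id is in the
-- staged delta but is no longer returned by the view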
View 1 Replies
View Related
Apr 10, 2014
I have files which have a date in the file name, and I want to load all the files in sequential order.
Here I am giving you an example of my file name.
"NetworkActivity_869_403722_01-01-2014.log"
"NetworkActivity_869_403722_01-02-2014.log"
"NetworkActivity_869_403722_01-03-2014.log"
"NetworkActivity_869_403722_01-04-2014.log"
"NetworkActivity_869_403722_01-05-2014.log"
These are my files; I want to load them sequentially, meaning Jan 1st, then Jan 2nd, and so on. One way to get the ordering is sketched below.
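An SSIS Foreach Loop does not guarantee any particular order, so one approach is to collect the file names into a table first and sort them by the date embedded in the name. A sketch, with a hypothetical dbo.FileQueue table:
-- order by the mm-dd-yyyy token at the end of the file name (style 110 = mm-dd-yyyy)
SELECT FileName
FROM dbo.FileQueue
ORDER BY CONVERT(date, REPLACE(RIGHT(FileName, 14), '.log', ''), 110);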
View 5 Replies
View Related
Aug 12, 2014
I want to load historical data from an old system into a new one. The thing is, the old system stored dates as datetime and the new one uses datetimeoffset.
All data was collected in the same time zone, but with Daylight Saving Time (DST).
The offset is either +04:00 or +05:00, based on the calendar date. To add to the complexity, the rules for DST changed a couple of years ago.
To determine the offset, I'd need to know what was or would have been the server Timezone for each historical date.
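If an upgrade is an option, the AT TIME ZONE operator in SQL Server 2016+ applies the historical DST rules from the Windows time-zone database, including rule changes over the years; on 2012/2014 the equivalent needs a hand-built table of DST transitions. A sketch assuming 2016+ (the zone name and table are placeholders; the zone must be the one the server actually ran in):
-- attach the offset that was in effect for each historical value
SELECT old_dt,
       old_dt AT TIME ZONE 'Azerbaijan Standard Time' AS dt_with_offset
FROM dbo.LegacyData;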
View 1 Replies
View Related
Dec 8, 2014
I need to load images from folders into a SQL Server table, and I have done it successfully for individual images. However, I also need to load the names of the folders and subfolders into separate columns, plus load all the images.
So the folders look like as in the screenshot and the final result of the table in SQL Server should look like the second screenshot.
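For pulling each image in, OPENROWSET with SINGLE_BLOB handles one file at a time; the folder and file names would come from whatever enumerates the directories (an SSIS Foreach loop, for instance). A minimal sketch with placeholder names:
-- load one image as varbinary(max) along with its folder names
INSERT INTO dbo.Images (FolderName, SubFolderName, FileName, ImageData)
SELECT 'Folder1', 'SubFolderA', 'pic1.jpg', BulkColumn
FROM OPENROWSET(BULK N'C:\Images\Folder1\SubFolderA\pic1.jpg', SINGLE_BLOB) AS img;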
View 9 Replies
View Related
Jan 27, 2015
For bulk load requests in SQL Server, is there a specific Profiler event, like RPC:Starting for RPC calls and SQL:BatchStarting for batch requests?
Are bulk load requests monitored through Profiler captured as SQL:Batch... events on the back end?
Are there any new features in 2012 or 2014 to identify a bulk request submitted through the bcp.exe utility or any other SqlBulkCopy program?
View 1 Replies
View Related
Mar 10, 2015
See pic
I need the syntax for using BCP.
Here are my requirements: I have a file that contains a bunch of INSERT statements. The contents of the file look like the following:
File has about 5000 rows.
INSERT INTO abc ( name ) VALUES ( 'Peter' );
INSERT INTO abc ( name ) VALUES ( 'Bob' );
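Note that bcp moves data files, not SQL scripts; a file full of INSERT statements is normally just executed, for example with sqlcmd (server and database names are placeholders):
sqlcmd -S MyServer -d MyDatabase -E -i C:\loads\inserts.sql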
View 2 Replies
View Related
May 18, 2015
How do I load files with a similar format, from two different locations, into the same database with the same SSIS package?
Lets say
Location 1: C:\LoadFiles\Cust1\APP_123445.txt
Location 2: D:\LoadFiles\cust2\VDD_543121.txt
Currently we have one SSIS package which loads and processes files from C:\LoadFiles\Cust1 only. We have to modify the existing package to load files from Location 2 (D:\LoadFiles\cust2) as well. Also, while loading, the package should assign a value to the existing column CustID depending upon the file name. File names always start with APP_ in the first location and VDD_ in the second location.
Assign CUSTID as 100 if the file name starts with APP_.
Assign CUSTID as 200 if the file name starts with VDD_.
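One way to handle the CustID rule is to capture the file name into a column during the load (the Foreach Loop's file name variable feeding a Derived Column, or an extra staging column) and then set CustID from it. A T-SQL sketch with placeholder staging names:
-- derive CustID from the file name captured during the load
UPDATE s
SET CustID = CASE WHEN s.SourceFileName LIKE 'APP[_]%' THEN 100 ELSE 200 END
FROM stg.LoadedRows AS s;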
View 1 Replies
View Related
Aug 3, 2015
I have the below requirement: loading last year's and current year's records for some IDs in my table.
We have to load the IDs that were active at the end of the year for the prior year, and the IDs that are active as of today for the current year. Here is the scenario: an ID is currently terminated but was active at the end of the prior year, and the record is not in the table, so we didn't load the count for the prior year.
Here the prior year is 2014-2015 and the current year is 2015-2016.
CREATE TABLE remp_year
(ID INT,
STATUS NVARCHAR(100) NULL,
START_DATE DATE NULL,
END_DATE DATE NULL,
date_year nvarchar(10) NULL);
INSERT INTO remp_year VALUES (10,'Active','2015-05-26','2015-12-31','2015-2016');
[Code] ...
Here, IDs 20 and 50 are terminated records from the prior year, so they should count for last year; the IDs that are active in this year will count for this year.
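A sketch of the two counts against the sample table, assuming the prior-year cutoff is 2015-12-31 (adjust to the real fiscal boundary):
DECLARE @PriorYearEnd date = '2015-12-31';
-- prior year: IDs still active on the cutoff date, even if terminated since
SELECT COUNT(DISTINCT ID) AS PriorYearCount
FROM remp_year
WHERE START_DATE <= @PriorYearEnd
  AND (END_DATE IS NULL OR END_DATE >= @PriorYearEnd);
-- current year: IDs active as of today
SELECT COUNT(DISTINCT ID) AS CurrentYearCount
FROM remp_year
WHERE START_DATE <= GETDATE()
  AND (END_DATE IS NULL OR END_DATE >= GETDATE());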
View 3 Replies
View Related
May 28, 2015
I'm attempting to load some data into an explicit hierarchy in MDS 2012 via the staging table and struggling with the HierarchyName field. Specifically I'm loading data into stg.[Entity Name]_Consolidated and using the exact name of the explicit hierarchy I've set up in the front end web application.
Originally my hierarchy was labelled "Reporting Hierarchy" and when loading the data into staging using this name then running the batch from the Import Data screen I can see the error message "Error - The HierarchyName is missing or is not valid.". I've checked the table mdm.tblHierarchy and can see that the name there is exactly as it was in the staging table and have since renamed the hierarchy as "Reporting_Hierarchy" with the same results.
View 0 Replies
View Related
Sep 8, 2015
An inherited SSIS (2012) solution displays the following messages when I try opening it in VS 2012:
"One or more projects in the solution were not loaded correctly."
Then another message displays
"The encrypted data in the project manifest failed to load. The project manifest is corrupted or the project was encrypted by another user."
View 8 Replies
View Related
Jan 2, 2014
We are designing a Staging layer to handle incremental load. I want to start with a simple scenario to design the staging.
In the source database there are two tables, e.g. tbl_Department and tbl_Employee. Both tables load a single table in the destination database, e.g. tbl_EmployeRecord.
The query which loads tbl_EmployeRecord is: SELECT EMPID, EMPNAME, DEPTNAME FROM tbl_Department D INNER JOIN tbl_Employee E ON D.DEPARTMENTID = E.DEPARTMENTID.
Now we need to identify the incremental changes in tbl_Department and tbl_Employee, store them in staging, and load only the increment to the destination.
The columns of the tables are,
tbl_Department : DEPARTMENTID,DEPTNAME
tbl_Employee : EMPID,EMPNAME,DEPARTMENTID
tbl_EmployeRecord : EMPID,EMPNAME,DEPTNAME
How do we design the staging for this to handle insert, update and delete? A sketch of one approach follows.
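One simple shape, assuming a staging table stg_EmployeRecord (hypothetical) holds the latest snapshot of the join: compare it to the target to classify rows, then apply each class.
-- inserts and updates: staged rows that are new or differ from the target
SELECT s.EMPID, s.EMPNAME, s.DEPTNAME,
       CASE WHEN t.EMPID IS NULL THEN 'I'
            WHEN s.EMPNAME <> t.EMPNAME OR s.DEPTNAME <> t.DEPTNAME THEN 'U'
       END AS ChangeType
FROM stg_EmployeRecord AS s
LEFT JOIN tbl_EmployeRecord AS t ON t.EMPID = s.EMPID;
-- deletes: target rows with no match in the staged snapshot
SELECT t.EMPID
FROM tbl_EmployeRecord AS t
LEFT JOIN stg_EmployeRecord AS s ON s.EMPID = t.EMPID
WHERE s.EMPID IS NULL;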
View 9 Replies
View Related
May 15, 2014
Is there a switch I can use to force a bulk insert to succeed even if data is truncated? I'm good with that. The truncated data, in this case, is not data I can use anyway if it is long enough to be truncated.
I need to keep the field at VARCHAR(23) and if I expand it, I won't be able to join on it after the file load completes. I'd like the data to be inserted (truncated if need be) and then I'll deal with the records that are truncated after I load the file.
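As far as I know, BULK INSERT has no switch to truncate silently; the usual workaround is to load into a staging table with a wider column and truncate explicitly on the way to the real table. A sketch with placeholder names and file format:
-- stage with a wider column so the load itself never fails on length
CREATE TABLE #stage (JoinKey varchar(100), OtherCol varchar(100));
BULK INSERT #stage FROM 'C:\loads\data.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
-- then truncate explicitly to the 23 characters the join needs
INSERT INTO dbo.Target (JoinKey, OtherCol)
SELECT LEFT(JoinKey, 23), OtherCol FROM #stage;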
View 5 Replies
View Related
Nov 24, 2014
How do I write stored procedures to load the data model from OLTP to the DWH?
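The shape depends entirely on the model, but a minimal upsert-style loader for a single dimension might look like this (every name here is a placeholder):
CREATE PROCEDURE dbo.Load_DimCustomer
AS
BEGIN
    SET NOCOUNT ON;
    MERGE dwh.DimCustomer AS tgt
    USING oltp.Customer AS src
       ON tgt.CustomerID = src.CustomerID
    WHEN MATCHED AND tgt.CustomerName <> src.CustomerName
        THEN UPDATE SET tgt.CustomerName = src.CustomerName
    WHEN NOT MATCHED
        THEN INSERT (CustomerID, CustomerName)
             VALUES (src.CustomerID, src.CustomerName);
END;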
View 9 Replies
View Related
Mar 6, 2015
I have a stored procedure. In the SP I am using a cursor to load data from a parent table into several child tables.
I have attached the script with this message.
My problem is how to use a direct SELECT and INSERT (a set-based load) to speed up the process instead of the cursor.
USE [IconicMarketing]
GO
/****** Object: StoredProcedure [dbo].[SP_DMS_INVENTORY] Script Date: 3/6/2015 3:34:03 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
[Code] ....
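Without seeing the attached script, the general shape is one set-based INSERT ... SELECT per child table instead of the cursor loop; the names below are stand-ins for the script's tables:
-- load each child table in one statement, skipping rows already present
INSERT INTO dbo.ChildTable (ParentID, Attribute1, Attribute2)
SELECT p.ParentID, p.Attribute1, p.Attribute2
FROM dbo.ParentTable AS p
LEFT JOIN dbo.ChildTable AS c ON c.ParentID = p.ParentID
WHERE c.ParentID IS NULL;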
View 3 Replies
View Related
Aug 16, 2013
SQL 2012 - Converted BIDS project and DTUTIL cannot load the package. I just converted a really simple 2008 BIDS project to 2012, and the 2012 dtutil will not load it. I tried both the 32-bit and 64-bit dtutil.
Get the following error message.
"Could not load package "c:\temp\SSISPackage.dtsx" because of error 0x80131534. Description: the package failed to load due to error 0x80131534 "<null>". This occurs when CPackage::LoadFromXML fails."
I can import the package in SQL Server Management Studio, and I can deploy it using the manifest and the Deployment Wizard, but dtutil chokes on it. It is the .dtsx straight from the 2012 BIDS build.
dtutil /DestS ficertx2x /FILE C:\temp\Release2012\CreateSSISPackage.dtsx /COPY SQL;/MYTestPackage /QUIET
View 9 Replies
View Related
Mar 27, 2000
I have a situation where I need to migrate data from an older platform to a newer one. The data from the old system(s) will be available on DAT tapes. All database construction on the new system will be identical to the old one in size and schema, except for one table (call it "ARCHIVE").
If the ARCHIVE table on the old system is 210MB, and the ARCHIVE table on the new system has the same attributes but has been expanded to 380MB in size, can I simply restore the dump for the old table into the new ARCHIVE?
Empirically it works (I have done it with apparent success two times) but I seem to recall that backups are done by pages, and I'm concerned that there may be conditions not being met by simply doing the restore the way I'm planning to do it.
Also, are there any tests or checks built into SQL which I can use to check table integrity on the target ARCHIVE table after the restore?
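For the integrity question, DBCC can check the restored table (or the whole database) directly; the database name below is a placeholder:
-- structural and allocation checks on the restored table
DBCC CHECKTABLE ('ARCHIVE');
-- or check everything in the target database at once
DBCC CHECKDB ('TargetDB');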
Any help is greatly appreciated.
Best rgds,
Kevin
View 4 Replies
View Related
Aug 28, 2000
I have a database with a 2 GB .mdf and a 1 GB .ldf. The backup is much smaller. I need to copy this database to another server which does not have that much free space. Can this be restored to a smaller .mdf and .ldf? How?
Thanks.
Ranjit
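A restore recreates the files at their recorded sizes, so the database cannot be restored directly into smaller files; but if most of the space inside the files is free, they can be shrunk afterwards. A sketch with placeholder logical file names and target sizes in MB:
USE MyRestoredDB;
DBCC SHRINKFILE (MyRestoredDB_Data, 1500);  -- shrink the data file toward its used size
DBCC SHRINKFILE (MyRestoredDB_Log, 100);    -- shrink the log file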
View 1 Replies
View Related
Dec 9, 2005
Is it generally or almost always better to have multiple small SPs and functions to return a result set instead of using a single big 1000+ line SP?
I have one SP, for example, that is 1000+ lines. From early analysis of the SP I see it first has 3 big blocks of code separated by IF statements. Then within each IF block of code I see 3-4 UNIONs. UNIONs mean they are all returning the same columns, so I am guessing these are prime candidates for becoming individual functions or SPs, maybe even dynamic SPs. Obviously I am not showing you the code, but am I right to think this way?
This same SP has about 15 JOINs, including some LEFT JOINs and one LEFT JOIN to a (SELECT statement), and almost all the tables referenced by these JOINs have thousands of records, very possibly hundreds of thousands. The SELECT statement is returning 30-40 columns from a lot of these tables, plus I also see a lot of CASE ELSE statements within the main SELECT statement. The code of each CASE statement is calling a function. As an example, if the CASE is for EmployeeID then a function is being called to get the EmployeeID's FirstName and LastName. If the CASE is for CustomerID then another function is being called to get the Customer Name.
I am thinking of cutting this big SP into many smaller SPs and/or functions, and I also plan on using table variable(s) to hold temporary results while I continue processing the records from the table variable with other code logic. Also, I want to leave as the last thing to do the conversion of the "machine result", i.e. EmployeeID or CustomerID, to a "human readable result", i.e. Employee FirstName and LastName, or Customer Name.
I am trying to test this on Northwind's Employees table, but the Statistics IO, Time and the Execution Plan are something I've only started to use. I am unable to draw a conclusion about which method is better. I'll work on another post specifically with details of the test that I am currently doing.
My opinion is that having 1 single SP with 15+ joins causes a lot more locking than if I ran smaller SPs and stored the results in temp table variables, continuing to process the remaining code logic from there. I would like to know what you think, and whether I am right or wrong in how I want to optimize this SP?
Thank you
View 5 Replies
View Related
Sep 4, 2007
Hello there....
I have a scenario where I am trying to set up multiple database instances for multiple test/development environment(s) for each group where the test/dev environment will contain a copy of what was in the production environment.. The test/dev environment can be refreshed on demand based on the prior night's full backup of the production environment.. This is good for our web developers and for training purposes, as the test environment(s) can be played around with, and will retain data for as long as the developers/testers/trainers need it, and then can be refreshed to the most current data when everyone in the group decides they want it refreshed...
Normally, this works out well...
However I am having a file size issue...
The production database was pre-allocated (a long time ago) to a large file size (probably to reduce external fragmentation). So even though the backup file is only 5GB, the production database file itself is something like 40GB. I believe the production database already has a maintenance plan on it that rebuilds the indexes each weekend, etc...
Anyhow, the problem is that when I restore the 5GB database back into a newly created database file, the file expands all the way up to 40GB again, even though the backup file is 5GB...
Normally this would be fine; the problem is that I am trying to create multiple environments, and I do not have the disk space on my test/dev server for 40GB (plus another 15GB or so for the transaction log) multiplied by each of my test/dev environments... It would be much nicer if I could get this down to 5GB (or heck, even 10GB), since I know for sure that the total amount of data in the database doesn't exceed 5GB, and I have plenty of space on my disk for 5 (or 10) GB multiplied by each of the environments I want to create...
I have tried DBCC SHRINKDATABASE and I have tried DBCC SHRINKFILE with TRUNCATEONLY after the restore, which seems to work but doesn't....
I have also tried to go into the database properties and change the "initial size", but that doesn't do anything either.
Is there any way to get this file back down to a manageable size after the restore??
Or better yet, is there a special method to restore the database so it won't 'expand' back out to 40GB in the first place??? Perhaps some option to tell the restore process that even though the source database had a 40GB pre-allocation, the database I am restoring into doesn't need to be pre-allocated?
View 1 Replies
View Related
Jul 19, 2007
Alright. I'm stuck. I admit it!
I have a bunch of names, and each name can have one or more 'roles' (operator, reader, key operator, etc.; just random words really) attached to it.
Using Reporting Services, I've managed to get the information I need with relative ease... the only problem is, with 900-some records to display, its current length of 41 pages, with just one column going down the left side of each page, is not exactly preferred by my superior (can't say I blame him really; it looks kind of odd!)
It looks like this right now:
Name1
Function
Function
Function
Name2
Function
Function
Name3
Function
Function
etc all the way down to page 41
I need it to look something like this:
Name 1 Name 4 Name 7
Function Function Function
Name 2 Name 5 Function
Function Function Name 8
Function Function Function
Name 3 Name 6 Function
Function Function Function
etc. Or some variation of...
I've fiddled around, and merely adding one extra column to the initial table layout with the same =(!UserName etc) expression just replicates the data in the second column... not giving me the new stuff.
I'm quite new to Reporting Services, but none of the tutorials I've seen/done seem to accommodate this... Heeelp!
View 3 Replies
View Related
Apr 3, 2015
I am unable to load data from a flat file into a SQL table using the BULK INSERT SQL statement.
My code:-
DECLARE @filePath VARCHAR(200)
DECLARE @sql VARCHAR(8000)
Declare @filename varchar(100)
set @filename='CCNVZ_150401054418'
SET @filePath = 'I:\IncomingFiles\' + @FileName + '.txt'
[Code] .....
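BULK INSERT needs a literal path, which is why the dynamic SQL is built; a complete minimal shape of the same idea (the table name, delimiters and FIRSTROW are assumptions):
DECLARE @filePath varchar(200), @sql varchar(8000), @filename varchar(100);
SET @filename = 'CCNVZ_150401054418';
SET @filePath = 'I:\IncomingFiles\' + @filename + '.txt';
SET @sql = 'BULK INSERT dbo.StagingTable FROM ''' + @filePath + ''' '
         + 'WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2)';
EXEC (@sql);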
View 1 Replies
View Related
Oct 24, 2005
I'm having problems handling a very large number of user records - about 100,000-150,000 records. Instead of selecting all of them at once, how do I, for example, select 1000 of them? (e.g. get no. 1 - no. 1000, then get no. 1001 - no. 2000)?
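On SQL Server 2005 and later, ROW_NUMBER() gives a straightforward page query (table and column names are placeholders):
-- fetch users 1001-2000 by row number
SELECT UserID, UserName
FROM (SELECT UserID, UserName,
             ROW_NUMBER() OVER (ORDER BY UserID) AS rn
      FROM dbo.Users) AS numbered
WHERE rn BETWEEN 1001 AND 2000;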
View 1 Replies
View Related
Apr 16, 2002
Hi all,
I found a database file and a log file each over 2 GB on an MS SQL 2000 server. Actually, they only need around 200 MB. I tried backing up and truncating the database to make the size smaller, but the size does not get smaller. How can I do it?
Simon
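On SQL 2000 the usual sequence is to free the log first and then shrink each file toward its used size; the logical file names and target sizes (in MB) below are placeholders:
BACKUP LOG MyDB WITH TRUNCATE_ONLY;  -- discard the inactive log (valid on SQL 2000)
DBCC SHRINKFILE (MyDB_Data, 200);
DBCC SHRINKFILE (MyDB_Log, 50);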
View 3 Replies
View Related