SQL Server Admin 2014 :: Maximum Size Of DB Hosted On Cloud Environment?
Feb 28, 2015

What is the maximum size of a DB hosted in a SQL 2012 cloud environment?
Is there a way, using a stored procedure in a local database, to add a record to a database running in a cloud environment when the two reside in different domains?
By mistake I updated the values of a column in a SQL Server database hosted online; is there any way to undo the transaction? I didn't create any backup of the database. I read that the data can still be recovered through the .ldf (log) file, but I am unable to access it. Is there any way to get access to the log file, or any other way to recover the data?
I am asking about a virtual IP for SQL Server: is there a way to assign SQL Server an IP address other than the host server's IP address, the same as we do in a clustered environment?
Would it be possible to disjoin a SQL Server clustered environment from its domain and re-join it to a new domain without having to reinstall the cluster?
e.g., a 2-node active/active cluster with 4 named instances: SQLserver1.dn.za; SQLserver2.dn.za; SQLserver3.dn.za; SQLserver4.dn.za
servernode1.dn.za; servernode2.dn.za
Re-join them as SQLserver1.dn.ra; SQLserver2.dn.ra; SQLserver3.dn.ra; SQLserver4.dn.ra
servernode1.dn.ra; servernode2.dn.ra
What would be the impact on the servers? Will they be able to resolve the new DNS names?
I have SSIS 2012 Enterprise, use catalog deployment, and have more than 50 environment variables for connections to databases across my enterprise.
The problem is that when I go to configure the packages after deployment and pick the proper environment variables, the entries are not sorted, so I have to browse them all to find the right one.
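In the meantime, one way to review the variables in sorted order outside the configuration dialog is to query the SSIS catalog directly. A minimal sketch, assuming the default SSISDB catalog:

SELECT e.name AS environment_name,
       v.name AS variable_name,
       v.value
FROM SSISDB.catalog.environment_variables AS v
JOIN SSISDB.catalog.environments AS e
    ON v.environment_id = e.environment_id
ORDER BY e.name, v.name;  -- sorted, unlike the package configuration dialog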
I'm aware of the issues with setting your log file growth increment too low (causing too many VLFs, etc.), but I haven't seen much about the data file side of it.
Are there any benchmarks specifically on setting data file growth that low (on databases 1-100 GB in size)? Are there circumstances on well-utilized servers where that might be warranted?
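For anyone auditing this, the current file sizes and growth increments can be pulled per database before deciding; a sketch (run in the database in question):

SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(12)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(12)) + ' MB'
       END AS growth_increment
FROM sys.database_files;  -- size and growth are stored as 8 KB page counts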
From BOL, I see these remarks with respect to the MODIFY FILE subcommand (my underline added):
Initializing Files
By default, data and log files are initialized by filling the files with zeros when you perform one of the following operations:
Create a database
Add files to an existing database
Increase the size of an existing file
Restore a database or filegroup
This leads me to believe that expanding the size of a data file will also wipe out (my definition of 'initialize') any existing data within that file.
I may be misunderstanding 'initialize', because when I tested it out, I found this wasn't the case: my table data written to the file was still there after a resize.
I need to clarify to what degree I'd be taking a risk by increasing the file size of a data file which already has data in it.
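For what it's worth, the zero-initialization applies only to the newly allocated region of the file, not to pages that already hold data, which matches the test result above. A sketch of such a grow operation, with hypothetical database and logical file names:

ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, SIZE = 20480MB);
-- only the added space is zero-filled (or skipped entirely when
-- instant file initialization is enabled); existing pages are untouched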
When I execute DBCC SQLPERF(LOGSPACE); I get the following values:
Database Name   Log Size (MB)   Log Space Used (%)
master          16.17969        13.30275
tempdb          7.429688        61.7245
model           0.7421875       45.78947
msdb            5.554688        25.87904
distribution    2808.93         0.8172179
BANKDB          23438.87        48.20037
WSMIRSDB        109.7422        4.839111
For database BANKDB, Log Space Used (%) is about 48% and the log size is about 23,438.87 MB, whereas the database size of BANKDB is 60 GB. A FULL database and log backup is done once every night. My database is performing slowly now.
Do we need to take log backups more frequently, say once an hour, so that the log space used stays lower? The same query is taking more time to execute than before in the same database; is that because the log file has grown?
I do index reorganize and rebuild once a week and update stats nightly.
Is it correct that once log space used grows by more than 10% we need to take a log backup?
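Hourly (or more frequent) log backups are the standard way to keep log space in check under the FULL recovery model; with only a nightly log backup, the log has to hold a whole day of activity. A sketch of the statement an hourly Agent job would run (path is hypothetical; in practice each backup file gets a unique name):

BACKUP LOG BANKDB
TO DISK = N'D:\Backups\BANKDB_log.trn'
WITH COMPRESSION;
-- after each log backup the inactive portion of the log is reusable,
-- so Log Space Used (%) drops without the file having to grow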
I am having an issue determining the correct size of a table.
I have a tableA in a DB on the transaction server (Enterprise Edition); this table is replicated to the reporting server DB (Standard Edition).
When I check the space used by this table in the two databases, I see a noticeable difference.
I am using EXEC sp_spaceused 'tableA' to determine the space.
Transaction Server
------------------------------------------------------------------------------
name     rows      reserved    data        index_size   unused
TableA   1439999   695416 KB   507048 KB   182912 KB    5456 KB

Reporting Server
-------------------------------------------------------------------------------
name     rows      reserved    data        index_size   unused
TableA   1439999   656904 KB   483664 KB   172680 KB    560 KB
So I wanted to know: what could be the possible reasons for this difference?
Also, is it possible to find the table's size and, within that table, the size of each row?
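sp_spaceused only reports table-level totals, but per-row sizes can be approximated with DATALENGTH over the columns. A sketch (the key and column names are hypothetical):

EXEC sp_spaceused N'dbo.TableA';  -- table-level totals

SELECT TOP (100)
       id,
       DATALENGTH(col1) + DATALENGTH(col2) + DATALENGTH(col3) AS approx_row_bytes
FROM dbo.TableA;
-- DATALENGTH counts data bytes only; the row header, null bitmap and
-- column offsets add a few more bytes per row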
I have a 10 GB index and only 5 GB of disk space left.
How can I rebuild the index?
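When there isn't enough free space for a rebuild (which needs roughly the index's size again as working space), reorganizing, or pushing the sort to tempdb, are the usual workarounds. A sketch with hypothetical names:

ALTER INDEX IX_BigIndex ON dbo.BigTable REORGANIZE;
-- page-at-a-time operation, needs almost no extra space

ALTER INDEX IX_BigIndex ON dbo.BigTable
REBUILD WITH (SORT_IN_TEMPDB = ON);
-- moves the sort work to tempdb, though the rebuilt index itself
-- still needs room in the user database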
I have a web application and a SQL Server 2000 database in a hosted environment. How can I migrate my database to another server where I have SQL Server Express 2005?
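Backup and restore is the usual route, assuming the host lets you take a backup (a SQL 2000 backup restores onto 2005, which upgrades the database in place). A sketch with hypothetical names and paths:

-- on the SQL Server 2000 host
BACKUP DATABASE MyAppDb TO DISK = N'C:\Temp\MyAppDb.bak';

-- copy the .bak to the new server, then on SQL Server Express 2005
RESTORE DATABASE MyAppDb
FROM DISK = N'C:\Temp\MyAppDb.bak'
WITH MOVE 'MyAppDb_Data' TO N'C:\Data\MyAppDb.mdf',
     MOVE 'MyAppDb_Log'  TO N'C:\Data\MyAppDb_log.ldf';
-- logical file names vary; RESTORE FILELISTONLY lists them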
I'm building a hosted website and I am using SQL 2005.
The DBA for the host has told me that I cannot encrypt a symmetric key with a certificate when using that symmetric key for encryption, even though I've read that this method provides the best balance of performance and security for encrypting columns of data.
The DBA told me I can use either a cert or a symmetric key for encryption.
I have searched for comparisons and found a blog entry by Laurentiu Cristofor comparing certs with asymmetric keys, which leads me to believe that certs and asymmetric keys are very different from symmetric keys.
My question is: which is the best choice in a hosted environment for column encryption, a cert or a symmetric key?
Which is more secure? Does one offer a significant performance (dis)advantage?
TIA
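For reference, the pattern described in those articles (a symmetric key for the data, a certificate protecting the key) looks roughly like this; symmetric encryption is far cheaper per row, which is why the certificate normally only wraps the key rather than encrypting the columns itself. All names here are hypothetical:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword!123';
CREATE CERTIFICATE ColCert WITH SUBJECT = 'Column encryption certificate';
CREATE SYMMETRIC KEY ColKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE ColCert;

OPEN SYMMETRIC KEY ColKey DECRYPTION BY CERTIFICATE ColCert;
SELECT EncryptByKey(Key_GUID('ColKey'), N'secret value');
CLOSE SYMMETRIC KEY ColKey;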
I want to set up a database role so that users can use sp_readerrorlog through SSMS; it does a check on membership in the securityadmin role.
I have tested it and can see you can grant EXECUTE on xp_readerrorlog, but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog, but that's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, otherwise manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also checked what's reported by fn_my_permissions(null, null) and it shows minimal permissions, as I'd expect.
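A sketch of that 2012+ variant, with a hypothetical role and login:

USE master;
CREATE SERVER ROLE ErrorLogViewer;
DENY ALTER ANY LOGIN TO ErrorLogViewer;

ALTER SERVER ROLE securityadmin  ADD MEMBER [DOMAIN\ReportUser];
ALTER SERVER ROLE ErrorLogViewer ADD MEMBER [DOMAIN\ReportUser];
-- securityadmin satisfies sp_readerrorlog's role check, while the
-- explicit DENY blocks login changes, as tested above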
I have a web app that uses two SQL Express databases. One of the databases is the membership database. I am deploying the app to a hosted environment. Here is my connection string:
<add name="Akamojo_APSNETDB" connectionString="Data Source=.\SQLExpress; AttachDBFileName=|DataDirectory|\Akamojo_ASPNETDB.mdf; Initial Catalog=Akamojo_ASPNETDB; Integrated Security=true; User Instance=true"/>
Here is the error that I get:
Could not open new database 'Akamojo_ASPNETDB'. CREATE DATABASE is aborted.
Cannot attach the file 'e:\hosting\member\akamojo\TournamentLog\App_Data\Akamojo_ASPNETDB.mdf' as database 'Akamojo_ASPNETDB'.
File activation failure. The physical file name "c:\inetpub\wwwroot\TournamentLog\App_Data\Akamojo_ASPNETDB_log.LDF" may be incorrect.
The log cannot be rebuilt when the primary file is read-only.
The mdf file is not read-only. In the hosted environment the database is on the E: drive; in my dev environment it is on the C: drive. From the error message, it appears that when it tries to rebuild the log file, it looks for it on the C: drive, in a path that doesn't exist in the hosted environment.
Any help is greatly appreciated.
-Darrell
I have some code I built 2 weeks ago which I've been running daily, but it's suddenly stopped working with the following error:
"The table "tbl_Intraday_Tmp" has been created, but its maximum row size exceeds the allowed maximum of 8060 bytes. INSERT or UPDATE to this table will fail if the resulting row exceeds the size limit."
When I Google this, it seems to relate to tables with vast numbers of columns.
My table tbl_Intraday_tmp is relatively small. It has 7 columns: 1 varchar(5), 3 decimal(9,3) and 2 decimal(18,0). The bit I'm puzzled by is that it was working and then stopped.
I don't recall changing anything, but I wouldn't rule that out. I've inspected the source files and I don't believe they have changed either.
DECLARE
    @FileName varchar(50),
    @Path     varchar(50),
    @SqlCmd   varchar(1000) = '',
    @ASXCode  varchar(5),
    @Offset   decimal(18,0),
    -- (the rest of the code was collapsed in the original post)
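Given that the declared columns come nowhere near 8060 bytes (roughly 5 + 3x5 + 2x9 bytes of data plus row overhead), one possibility is that the table is built through dynamic SQL (@SqlCmd) and ends up with different columns than expected. A quick check of what actually got created, assuming the table lands in dbo:

SELECT c.column_id,
       c.name,
       t.name       AS type_name,
       c.max_length -- bytes; -1 means a (max) type
FROM sys.columns AS c
JOIN sys.types   AS t ON c.user_type_id = t.user_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.tbl_Intraday_Tmp')
ORDER BY c.column_id;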
I am using MS SQL Server 2008, and I have a table with 350 columns. When I try to create one more column, it fails with the message below:
Warning: The table XXX has been created, but its maximum row size exceeds the allowed maximum of 8060 bytes.
INSERT or UPDATE to this table will fail if the resulting row exceeds the size limit.
How can I resolve this?
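The declared maximum row size can be totalled from the catalog to see how far over the limit the table is, after which columns can be shrunk, converted to (max) types (which can live off-row) or split into a second table. A sketch, with the table name as a placeholder:

SELECT COUNT(*) AS column_count,
       SUM(CASE WHEN c.max_length > 0 THEN c.max_length ELSE 0 END) AS declared_max_bytes
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'dbo.XXX');
-- max_length = -1 means varchar/nvarchar/varbinary(max), which can be
-- stored off-row and does not count against the 8060-byte limit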
I couldn't insert more than 4000 characters into an "ntext" column in SQL Server. Is there any way I can increase the size, or any suggestions?
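ntext itself holds around a billion characters, so a 4000-character ceiling usually comes from an nvarchar(4000) variable or parameter feeding the INSERT rather than from the column. On SQL Server 2005 and later the simplest fix is usually nvarchar(max) end to end; a sketch with hypothetical names:

ALTER TABLE dbo.Articles ALTER COLUMN Body nvarchar(max);

DECLARE @body nvarchar(max);
SET @body = REPLICATE(CAST(N'x' AS nvarchar(max)), 10000);
INSERT INTO dbo.Articles (Body) VALUES (@body);
-- REPLICATE is capped at 8000 bytes unless its input is already (max)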
I inherited a lot of servers to upgrade to 2014, including an SSRS server.
The encryption key was never backed up, and it seems that no one knows what the password is.
Do I have to manually reload the reports? There are a lot of reports.
[URL]
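Report definitions survive losing the key; what is lost is the encrypted content (stored data source credentials and the like). If the key truly can't be recovered, the documented last resort is to delete the encrypted content and re-enter those credentials, rather than redeploying every report. From a command prompt on the SSRS host (for a default instance; check rskeymgmt /? for named-instance syntax):

rskeymgmt -d
-- deletes all encrypted content from the report server database;
-- stored credentials then have to be re-entered by hand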
I want to use BCP to load data from a text file.
By default, constraints are turned off in bcp, so I use the CHECK_CONSTRAINTS hint.
bcp aborts if ANY of the rows contains an FK violation; no data gets loaded.
If I add the -b 1 batch size option, it loads all data UNTIL the first FK violation, but nothing after that.
I want to load EVERYTHING except the violations, but bcp won't let me. Is there a way?
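A common workaround is to bulk load into a constraint-free staging table first and then move across only the rows that pass the FK check, which sidesteps bcp's all-or-nothing batch behaviour entirely. A sketch with hypothetical table and column names:

-- bcp MyDb.dbo.Orders_Staging in orders.txt -c -T   (no constraints to trip)

INSERT INTO dbo.Orders (OrderID, CustomerID, Amount)
SELECT s.OrderID, s.CustomerID, s.Amount
FROM dbo.Orders_Staging AS s
WHERE EXISTS (SELECT 1 FROM dbo.Customers AS c
              WHERE c.CustomerID = s.CustomerID);  -- only rows that satisfy the FK

SELECT * FROM dbo.Orders_Staging AS s               -- the violations stay behind
WHERE NOT EXISTS (SELECT 1 FROM dbo.Customers AS c
                  WHERE c.CustomerID = s.CustomerID);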
If I install an instance with Windows-only authentication and then change it to mixed mode, the password has already been set when I enable the sa login. What is the default? Is the password generated, and if so, how secure is it? What algorithm is used for that?
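For what it's worth, when setup runs in Windows-only mode it disables sa and assigns it a random password, so there is no well-known default to worry about. The safe move when switching to mixed mode is to set your own password before enabling the login:

ALTER LOGIN sa WITH PASSWORD = 'YourStrongPassword!123';
ALTER LOGIN sa ENABLE;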
My SQL databases in SQL Server 2014 have the status "suspect", as I see in SQL Server Management Studio. I can't restore the databases to a serviceable condition through standard procedures; I need to recover them from the .mdf files.
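With no usable backup, the last-resort route for a suspect database is emergency mode plus a repair that accepts data loss; worth attempting only once normal restores have genuinely been ruled out. A sketch for one database (name is hypothetical):

ALTER DATABASE MyDb SET EMERGENCY;    -- makes the suspect database readable
ALTER DATABASE MyDb SET SINGLE_USER;
DBCC CHECKDB (N'MyDb', REPAIR_ALLOW_DATA_LOSS);  -- may discard damaged pages
ALTER DATABASE MyDb SET MULTI_USER;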
I am using a monitoring system where I can monitor a numeric SQL result, provided the result is one field in one row. I would like to use this to monitor, say, the free space (or free percentage) on the master database. DBCC SQLPERF gives me several columns, with results for all databases on the server.
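A plain SELECT that returns a single scalar suits that kind of monitor better than DBCC output. For free data-file space in one database, something like this sketch:

USE master;
SELECT CAST(SUM(size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024.0
            AS decimal(10, 2)) AS free_space_mb
FROM sys.database_files
WHERE type = 0;  -- 0 = data files; size and SpaceUsed are 8 KB page counts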
In our environment, applications use a DNS name which points to the physical server's IP address. Now we are planning to move to 2014. We plan to have servers in different subnets, so we will have two IP addresses for the listener. How can we point the DNS name to the listener IPs? If a failover happens, can DNS point to the exact listener IP address on the new primary node?
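The listener handles this itself: when it is created, the cluster registers the listener's DNS name with both IP addresses, and on failover it simply brings online the IP belonging to the new primary's subnet, so the DNS name never needs repointing. Applications connect to the listener name, ideally with MultiSubnetFailover=True in the connection string so the client tries both IPs in parallel. A sketch of a two-subnet listener with hypothetical names and addresses:

ALTER AVAILABILITY GROUP MyAG
ADD LISTENER N'MyAgListener' (
    WITH IP ( (N'10.10.1.50', N'255.255.255.0'),
              (N'10.20.1.50', N'255.255.255.0') ),
    PORT = 1433);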
Is there a way to schedule a SQL job to run at different intervals?
For example, the job should run at:
7:00 AM
8:00 AM
and then at 10:00 AM
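SQL Server Agent allows multiple schedules on one job, so the job can simply carry three daily schedules at those times. A sketch of adding one of them through msdb (job name hypothetical; repeat for the other times):

EXEC msdb.dbo.sp_add_jobschedule
    @job_name          = N'MyDailyJob',
    @name              = N'Daily_0700',
    @freq_type         = 4,       -- daily
    @freq_interval     = 1,       -- every day
    @active_start_time = 070000;  -- HHMMSS, i.e. 07:00:00
-- add Daily_0800 and Daily_1000 the same way with 080000 / 100000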
We installed SQL 2014 on a test server.
We frequently observe:
"Process 0:0:0 (0x1e10) Worker 0x00000006B6D341A0 appears to be non-yielding on Scheduler 13. Thread creation time: 12906028806348. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 13%. System Idle 84%. Interval: 70189 ms."
Is it better to run Profiler or performance counters?
What filters do we have to select in Profiler to monitor the SQL Server?
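Before reaching for Profiler, the scheduler DMV is a cheap first look at whether schedulers are actually backed up while the message is firing; a sketch to run when it happens:

SELECT scheduler_id,
       current_tasks_count,
       runnable_tasks_count,  -- tasks waiting for CPU time
       work_queue_count,
       pending_disk_io_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;     -- user schedulers only
-- a persistently high runnable_tasks_count on the named scheduler
-- suggests CPU pressure rather than an I/O problem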
I have a SQL Server box running 2014 Reporting Services. I have another server running IIS v8.
I would like to be able to connect to the IIS site and be given the SSRS report browser.
So externally, if I browse to [URL], I am presented with the Report Server interface, the same as when I browse to http://xxx.xxx.xxx.xxx/reports internally.
What are my options?
What is the best approach for a read-only copy of a database that is ~1 TB? The primary database is fed nightly by an ETL process. We are currently trying to duplicate the ETL onto the read-only server, but that process is not going well, so we are looking at options that let SQL Server make the copy.
The primary database is on Win 2012 R2 with SQL 2012 or 2014, in a 2-node active/passive failover cluster.
The read-only copy will be on Win 2012 R2 with SQL 2012 or 2014. It is not a requirement to fail over to the read-only copy if the primary goes down.
What would be the best approach to accomplish the end result?
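Since failover isn't a requirement, log shipping with the secondary restored WITH STANDBY is one of the simpler fits: SQL Server ships the primary's log activity (including the nightly ETL) and the copy stays readable between restores. A sketch of the restore side, with hypothetical names and paths:

RESTORE LOG ReportsDB
FROM DISK = N'\\backupshare\ReportsDB_20150301.trn'
WITH STANDBY = N'D:\SQL\ReportsDB_undo.dat';
-- STANDBY keeps the database read-only and queryable; users are
-- disconnected only during each restore window

An AG readable secondary would also work on 2012/2014, but that needs Enterprise Edition, which is worth confirming before choosing.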
I have not worked with AGs (AlwaysOn Availability Groups) in SQL 2012. What are the possible errors, and how are they resolved, in AlwaysOn in SQL 2012?
I have 10 databases which are configured as principal in mirroring. I need to fail over all the databases as part of a planned failover. Instead of writing a query for each database's partner failover, is there a script which will generate the failover statements for the databases acting as principal?
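The mirroring catalog view can generate the statements for every database currently in the principal role; run this SELECT, then execute its output on the principal server. A minimal sketch:

SELECT N'ALTER DATABASE ' + QUOTENAME(DB_NAME(database_id))
     + N' SET PARTNER FAILOVER;' AS failover_cmd
FROM sys.database_mirroring
WHERE mirroring_role = 1;  -- 1 = principal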
How do I check the delay between replicas in an AG?
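The replica-state DMVs expose the send and redo queues, which is where the delay shows up. A sketch joining them for per-database lag per replica:

SELECT ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       drs.log_send_queue_size,  -- KB not yet sent to the secondary
       drs.redo_queue_size,      -- KB received but not yet redone
       drs.last_commit_time
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON drs.replica_id = ar.replica_id;
-- comparing last_commit_time between the primary and secondary rows
-- gives a rough data-latency figure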
View 3 Replies View Relatedmy 4 GB .OST file is corrupted and i am not able to access my mails and contacts.