SQL Server Admin 2014 :: Unable To Open Designer / Field Value Required?
Jan 23, 2015
I'm using SQL accounting software and I have a problem with its report designer. When I used the report designer to design my customer statement of account, I saved the new design without renaming it, so the name was left empty, and then I exited the designer. Now when I re-open the report designer, a "field value required" message suddenly pops up. What should I do? How can I re-open the report designer again?
I have a Windows NT group that is used to delegate certain database responsibilities to other members of staff, and I am trying to grant permissions for the members of the group to be able to establish database mirroring sessions, as in run the following:
ALTER DATABASE <database> SET PARTNER = 'tcp://principal_server.domain.com:port';
Although the group has db_owner role membership on the user database, which grants the ALTER permission on the database, the following is generated in the error log when they get to this step on the intended mirror instance, after restoring the database correctly in preparation:
SqlDumpExceptionHandler: Process 59 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
* *******************************************************************************
* BEGIN STACK DUMP:
*   10/29/15 11:16:15 spid 59
*
* Exception Address = 00007FF9A6AF838C Module(sqlmin+000000000003838C)
* Exception Code    = c0000005 EXCEPTION_ACCESS_VIOLATION
* Access Violation occurred reading address 00000000000000D8
* Input Buffer 210 bytes -
*   alter database <redacted> set partner = '<redacted>';
As you can see, the statement fails for the user. There are no issues with the database, as I am able to run the same query successfully using my own sysadmin account after the failed attempt. What other minimum permissions might the group need to successfully set up a mirroring session?
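One candidate I intend to check: besides ALTER on the database, the logins involved in a mirroring session also need CONNECT permission on the database mirroring endpoint. A minimal sketch, assuming the endpoint is named Mirroring and using a hypothetical group name:

-- run on each partner instance; endpoint and group names are placeholders
SELECT name, state_desc FROM sys.database_mirroring_endpoints  -- find the real endpoint name
GO
GRANT CONNECT ON ENDPOINT::Mirroring TO [DOMAIN\DBAGroup]
GO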
I am trying to set up querying Active Directory from SQL for the first time.
We are running on Windows Server 2012 and using SQL 11.0.2100.60. I have tried the following.
SQL is on server dev; AD is on server DO.
EXEC sp_addlinkedserver 'ADSI', 'Active Directory Services 2.5', 'ADSDSOObject', 'adsdatasource'
GO
[Code] ....
I get the following error when I try to query:
Msg 7321, Level 16, State 2, Line 2 An error occurred while preparing the query "SELECT name FROM 'LDAP:// xxxx.internal' WHERE objectCategory='Person' AND objectClass = 'contact'" for execution against OLE DB provider "ADSDSOObject" for linked server "ADSI".
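For what it's worth, the pattern I've seen work is querying through OPENQUERY with a fully qualified LDAP path, after enabling the "Allow inprocess" option on the ADSDSOObject provider (via the undocumented sp_MSset_oledb_prop or the provider options dialog in SSMS). A sketch, with the DC components as placeholders:

-- enable in-process loading for the provider (one-time, instance level; undocumented procedure)
EXEC master.dbo.sp_MSset_oledb_prop N'ADSDSOObject', N'AllowInProcess', 1
GO
-- query AD through the linked server; the DC components below are placeholders
SELECT * FROM OPENQUERY(ADSI,
  'SELECT name FROM ''LDAP://DC=xxxx,DC=internal'' WHERE objectCategory=''Person'' AND objectClass=''contact''')
GO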
I recently installed standalone version of SQL 2014 Standard on my work computer. I used Access before but I want to use a SQL server instead.
We have a shared drive where a file gets deposited every day at midnight. I want to be able to get this file and import it to the server (it's basically a list of names).
Here is what I have done so far:
I created the database
Created the file and successfully imported data into it using the Import Data feature.
I saved the SSIS package
Scheduled an Agent job for this package to run at a certain time, daily
At first the jobs would fail with an "Access is denied" error. I added a user under Credentials with my network account (I have admin rights on the work computer). I also added a proxy for the credential user I made.
Now the jobs fail with a "Cannot open data file" error. I have tried changing things here and there, but I can't get it to work.
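For reference, this is roughly how the credential and proxy are wired up now (all names and the password are placeholders); my understanding is that the domain account behind the credential, not the Agent service account, is what needs read rights on the network share:

-- credential/proxy sketch; names and secret are placeholders
CREATE CREDENTIAL ImportCredential WITH IDENTITY = N'DOMAIN\MyAccount', SECRET = N'password_here'
GO
EXEC msdb.dbo.sp_add_proxy @proxy_name = N'SSISImportProxy', @credential_name = N'ImportCredential', @enabled = 1
GO
EXEC msdb.dbo.sp_grant_proxy_to_subsystem @proxy_name = N'SSISImportProxy', @subsystem_id = 11  -- 11 = SSIS package execution
GO
-- then, in the job step properties, set "Run as" to SSISImportProxy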
When trying to reinitialize a transactional replication subscription I am unable to select the "Generate the new snapshot now" checkbox. This seems to be happening only with SSMS 2014. When I connect to the same server from SSMS 2008 R2 I am able to select this checkbox.
The DB is in simple recovery mode. There are no open transactions (used dbcc opentran).
The server is running SQL Server 2014 and the DB is in compatibility mode SQL Server 2008 (100). It was upgraded to 2014 a month or two ago.
I have tried to re-size the log to 100 MB, but every way I have tried (none gave errors), the log file remains the same size. I have tried to shrink the log file (through the UI and via DBCC commands) without success; no errors, but also no change in file size.
I have checked Log Reuse Waits, just in case, and as expected it showed “NOTHING” (select log_reuse_wait_desc, name from sys.databases)
I tried running a checkpoint, but that did not allow any resize or shrink to work.
I have tried creating large transactions to move the used point in the log file, in case this was the issue. I did this by creating tables that I drop after large inserts. While it shows me that the log space % used increased, the log file still does not allow the space to be reduced.
The following is what I was using for the transactions to get the log used.
BEGIN TRAN
select a.* into testtable from sysobjects a, sysobjects b, sysobjects c
ROLLBACK TRAN
Do I just need to continue running large transactions until the log space used gets high enough for the "end point" in the log to really move? Is there an easier way to accomplish this (I have several DBs that have almost identical problems)? What I am using moves the log space percent used by about one percent per execution.
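One thing I still plan to verify is whether the active VLF sits at the end of the file, since a shrink can only release space after the last active VLF. What I'm using to check and shrink (the database and logical log file names are placeholders):

USE MyDB  -- placeholder database name
GO
DBCC LOGINFO  -- rows with Status = 2 are active VLFs; note where they sit in the file
GO
CHECKPOINT  -- in SIMPLE recovery this lets the inactive portion of the log be reused
GO
DBCC SHRINKFILE (MyDB_log, 100)  -- logical file name from sys.database_files; target size in MB
GO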
I am using a monitoring system where I can monitor a numeric SQL result, assuming the result is one field and one row. I would like to use this to monitor, say, the free available space or percentage on the master database. DBCC SQLPERF gives me a few columns and results for all databases on the server.
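The workaround I'm testing is to capture DBCC SQLPERF(LOGSPACE) into a temp table so the monitor gets exactly one field and one row; a sketch:

-- returns one row, one column: percent of log space used for master
CREATE TABLE #logspace (DatabaseName sysname, LogSizeMB float, LogSpaceUsedPct float, Status int)
INSERT INTO #logspace EXEC ('DBCC SQLPERF(LOGSPACE)')
SELECT LogSpaceUsedPct FROM #logspace WHERE DatabaseName = N'master'
DROP TABLE #logspace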
I've recently started working with a public sector organisation who have 4 clustered SQL instances with 80% of their databases mirrored.
Looking at the transaction log, it seems that a transaction log backup is a good idea, as the log is 4x larger than the data file. But I'm not allowed access to the physical server to check onto which drive I could write the .trn files. No RDP, no VMware; let's be honest, I'm not even allowed to launch a command line. Also, the server manager informs me: "We will need to carefully look at database backups if you guys want to start doing these backups on box, as that will break our off box backup routine (it will screw the transaction chain)."
I don't understand how backing up the transaction log could break the "transaction chain"?
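The only explanation I can come up with is that restoring log backups needs the full unbroken sequence of them, so a log backup taken on box would capture a stretch of log the off-box routine never sees, leaving a gap in their chain. If that's the concern, a copy-only log backup is supposed to leave the chain alone; a sketch, path and name being placeholders (note that a copy-only log backup does not truncate the log, so it would not shrink it):

BACKUP LOG MyDB TO DISK = N'X:\temp\MyDB_log_copyonly.trn' WITH COPY_ONLY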
I am setting up SQL 2014 AlwaysOn. I was able to set up the replicas between 2 servers in the same subnet. Their IP addresses are like this:
100.20.200.200 100.20.200.201
When I am trying to introduce another node into the cluster which has IP address like 100.10.101.102, I am getting an error that the server isn't reachable.
I need to grab the value of the ReportFolder built-in global field for a particular report on Report Manager, i.e., the full path to the report, when the user runs the report in SSRS. Once I get this, I need to process the path, get the parent folder of the report, and pass that to a stored procedure to do some business logic.
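The approach I'm trying is to expose Globals!ReportFolder to the query through a hidden report parameter (say @ReportPath, with its default value set to the expression =Globals!ReportFolder) and to do the parent-folder string work in T-SQL. A sketch, where the path value is just an example:

-- @ReportPath would arrive from the hidden report parameter; example value shown
DECLARE @ReportPath nvarchar(260) = N'/Finance/Monthly/Statements'
SELECT LEFT(@ReportPath, LEN(@ReportPath) - CHARINDEX('/', REVERSE(@ReportPath))) AS ParentFolder
-- returns /Finance/Monthly (an empty string means the report sits in the root folder)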
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see that you can grant execute on xp_readerrorlog, but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also did a check of what's reported from fn_my_permissions(null, null) and it shows minimal permissions, as I'd expect.
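For anyone wanting to reproduce this on 2012+, roughly what I ran (role and login names are placeholders):

-- run in master; names are placeholders
CREATE SERVER ROLE ErrorLogReaders
GO
DENY ALTER ANY LOGIN TO ErrorLogReaders
GO
ALTER SERVER ROLE ErrorLogReaders ADD MEMBER [DOMAIN\SomeLogin]
GO
-- membership in securityadmin still has to be granted per login
ALTER SERVER ROLE securityadmin ADD MEMBER [DOMAIN\SomeLogin]
GO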
I'm making a copy of some tables between 2 servers.
Server 1 requires a sql login
Server 2 is using Windows Auth.
I have a user on server 1 named "odbc" that is able to log in.
However, my copy task fails. When I drill into the error, it lists the first user on server 1 alphabetically as the failed login?! But in my DTS package I am specifying the "odbc" user and password.
I think I have a permissions problem on server 1. So my question: what minimum permissions does user "odbc" need to copy a table?
On server 1 I can copy from Northwind to server 2 just fine, but any other DB on server 1 causes the weird failure with the wrong username.
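What I would have expected to be sufficient on the source side is SELECT on the tables being copied (with CREATE TABLE/INSERT handled by Windows auth on server 2); a sketch, with the database and table names as placeholders:

-- on server 1, in the source database; user and table names are placeholders
USE SourceDB
GO
CREATE USER odbc FOR LOGIN odbc
GO
GRANT SELECT ON dbo.SomeTable TO odbc
GO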
I am unable to update the data record by record in the below scenario.
Required output:
A patient can be admitted/re-admitted to the hospital multiple times. If a patient is readmitted multiple times after the first visit, the first visit record gets Re-admission=0 and Index=1. This visit becomes the index admission for that patient, and the index admission should be used to calculate the 30-day readmissions.
Current Output:
Calculation: from the index admission's discharge date to the next visit's admit date:
1) if the difference is less than 30 days, readmission=1 and Index=0;
2) else readmission=0 and Index=1 should be updated.
Every time, this check should use the latest index admission's discharge date.
To get this result I wrote the logic below, but it updates readmission=0 and Index=1 for everything after 30 days post-discharge of the first index admission.
UPDATE Readmission
SET Index_AMI = (CASE
        WHEN DATEDIFF(DD,
                (SELECT Sub.Max_Index_Dis
                 FROM (SELECT Patient_ID, MAX(Discharge_Date_Time) Max_Index_Dis
                       FROM Readmission
                       WHERE Index_AMI = 1 AND FPR.Patient_ID = Patient_ID
                       GROUP BY Patient_ID) Sub),
                FPR.Admit_Date_Time) between 0 and 31 THEN 0
        ELSE 1
    END),
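The per-row dependency (each visit compares against the latest index admission so far) is why a single set-based UPDATE keeps drifting back to the first index. One alternative I am considering, untested and assuming the column names above, is a recursive CTE that carries the current index discharge date forward visit by visit; the result can then be joined back for the UPDATE:

;WITH Ordered AS (
    SELECT Patient_ID, Admit_Date_Time, Discharge_Date_Time,
           ROW_NUMBER() OVER (PARTITION BY Patient_ID ORDER BY Admit_Date_Time) AS rn
    FROM Readmission
), Flagged AS (
    -- the first visit per patient is always an index admission
    SELECT Patient_ID, Admit_Date_Time, Discharge_Date_Time, rn,
           CAST(1 AS int) AS Is_Index,
           Discharge_Date_Time AS Index_Discharge
    FROM Ordered
    WHERE rn = 1
    UNION ALL
    -- a later visit within 30 days of the current index discharge is a
    -- readmission; otherwise it becomes the new index admission
    SELECT o.Patient_ID, o.Admit_Date_Time, o.Discharge_Date_Time, o.rn,
           CASE WHEN DATEDIFF(DAY, f.Index_Discharge, o.Admit_Date_Time) <= 30 THEN 0 ELSE 1 END,
           CASE WHEN DATEDIFF(DAY, f.Index_Discharge, o.Admit_Date_Time) <= 30
                THEN f.Index_Discharge ELSE o.Discharge_Date_Time END
    FROM Ordered o
    JOIN Flagged f ON f.Patient_ID = o.Patient_ID AND o.rn = f.rn + 1
)
SELECT Patient_ID, Admit_Date_Time,
       Is_Index AS [Index],
       1 - Is_Index AS Readmission
FROM Flagged
OPTION (MAXRECURSION 0)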
Post installation of SQL Server 2014 Express edition, I am able to connect to the Database Instance.
But when opening a new query window in SSMS or opening a table, I get the error:
Package 'RadLangSvc.Package, RadLangSvc, Version=12.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' failed to load
Object reference not set to an instance of an object. (mscorlib). I have already tried installing the components DACProjectSystemSetup_enu.msi, TSqlLanguageService_enu.msi, and DACFramework_enu.msi from the VS 2010 WCU DAC path.
If I install an instance with Windows-only authentication and then change it to Mixed Mode, when I enable the sa login the password has already been set. What is the default? Is the password generated, and if so, how secure is it? What algorithm is used for that?
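From what I've read of the setup behavior, choosing Windows-only authentication disables sa and assigns it a random generated password rather than a fixed default, so there is nothing guessable to inherit. Either way, what I do is set the password explicitly at the moment of enabling the login, so the generated one never matters; a sketch with a placeholder password:

ALTER LOGIN sa WITH PASSWORD = N'UseAStrongPasswordHere!'  -- placeholder; pick your own
GO
ALTER LOGIN sa ENABLE
GO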
My SQL databases in SQL Server 2014 have the status "suspect", as I saw in SQL Server Management Studio. I can't restore the databases to a serviceable condition through standard procedures. I need to recover from the .mdf file.
In our environment, applications use a DNS name which points to the physical server's IP address. Now we are planning to move to 2014. We are planning to have servers in different subnets, so we will have two IP addresses for the listener. How can we point the DNS name to the listener IPs? If a failover happens, can the DNS name point to the exact listener IP address on the current primary node?
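From what I've read, the cluster registers both listener IPs under the single DNS name (the RegisterAllProvidersIP setting), only the IP for the current primary subnet is online at any time, and clients that set MultiSubnetFailover=True in their connection strings try all registered IPs in parallel. For reference, a listener spanning both subnets would be created roughly like this (the AG name, listener name, and addresses are placeholders):

ALTER AVAILABILITY GROUP [MyAG]
ADD LISTENER N'MyListener' (
    WITH IP ((N'100.20.200.210', N'255.255.255.0'),
             (N'100.10.101.110', N'255.255.255.0')),
    PORT = 1433)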
"Process 0:0:0 (0x1e10) Worker 0x00000006B6D341A0 appears to be non-yielding on Scheduler 13. Thread creation time: 12906028806348. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 13%. System Idle 84%. Interval: 70189 ms."
Is it better to run Profiler or performance counters?
What filters do we have to select in Profiler to monitor SQL Server?
I have a SQL server box running 2014 reporting services. I have another server running IIS v8.
I would like to be able to connect to the IIS site and be given the SSRS report browser.
So externally if I browse to [URL], I am presented with the report server interface, the same as if I browse to http://xxx.xxx.xxx.xxx/reports internally.
What is the best approach for a read-only copy of a database that is ~1 TB? The primary database is fed nightly by an ETL process. We are currently trying to duplicate the ETL to the read-only server, but that process is not going well. So we are looking at other options to let SQL make the copy.
The primary database is on a Win12R2 with SQL 12 or 14, a 2 node A/P failover cluster.
The read only copy will be on a Win12R2 with SQL 12 or 14. It is not a requirement to fail over to the read only copy if the primary should go down.
What would be the best approach to accomplish the end result?
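One option we are evaluating is log shipping with the secondary kept readable via STANDBY, which lets SQL keep the copy current between refreshes. A bare sketch of the manual equivalent (database name and paths are placeholders):

-- on the primary, after the nightly ETL finishes
BACKUP DATABASE BigDB TO DISK = N'\\share\BigDB.bak' WITH COPY_ONLY, COMPRESSION
GO
-- on the read-only server; STANDBY leaves the copy readable between refreshes
RESTORE DATABASE BigDB FROM DISK = N'\\share\BigDB.bak'
    WITH REPLACE, STANDBY = N'D:\Standby\BigDB_undo.dat'
GO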
I have 10 databases which are configured as principals in mirroring. I need to fail over all the databases as part of a planned failover. Instead of writing a query for each database to fail over its partner, is there a script which will generate the failover statements for the databases that are currently principal?
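Something like this is what I had in mind; a sketch that generates one statement per principal database (run it on the principal instance and execute the generated statements there):

-- generates one failover statement per database where this server is the principal
SELECT 'ALTER DATABASE ' + QUOTENAME(d.name) + ' SET PARTNER FAILOVER;'
FROM sys.database_mirroring m
JOIN sys.databases d ON d.database_id = m.database_id
WHERE m.mirroring_role_desc = 'PRINCIPAL'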
After installing SSMS on some computers, the only way we can get SSMS to run correctly is to run it as administrator. Is there a way where you don't have to do that? These end users are logging in as themselves and have accounts in SQL Server all set up, but SSMS will only launch for them if we right-click and select "Run as administrator". After doing some digging, it seems that this is a common problem out there.
Have a SQL 2014 install and cannot for the life of me get the maintenance plan to remove old backups. I've tried everything. Rights to the folder where the backups are stored are adequate, the extension set in the cleanup task is as it should be, etc. The log shows the job ran successfully. Running the command manually shows successful completion, but the backups are still not removed.
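One thing still worth ruling out: the cleanup task expects the extension without a leading dot (bak, not .bak). To test outside the plan, the same undocumented procedure the task calls can be run by hand; a sketch, with the folder path as a placeholder:

-- xp_delete_file is undocumented; args: file type (0 = backup), folder, extension (no dot), cutoff date, include subfolders
DECLARE @cutoff datetime = DATEADD(DAY, -7, GETDATE())
EXECUTE master.dbo.xp_delete_file 0, N'X:\Backups\', N'bak', @cutoff, 0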
When I execute the restore log command, the messages window shows how many seconds the restore takes; at the same time, the status bar also shows how many seconds the command takes.
The two values are different and can be very different; see the examples below: in one, the restore takes 1.8 seconds but the command takes 4 seconds in total to complete; in the other, 8.1 seconds versus 12 seconds.
What does SQL Server or Windows do after the restore?
pic a:
pic b:
I ran xperf. I can see that after the restore is completed, SQL Server does a garbage collect and a log write, which run very quickly, but the storage is busy reading the log file for nearly 2.2 seconds (4 - 1.8) and about 4 seconds (12 - 8.1).
pic 1:
pic 2:
See pic 1 above: from 13 to 17 the restore operation is finished, but the storage jumps to 100% active to do some reads, only reads, no writes. Zooming into that period gives pic 2: it reads 4096 (I don't know the unit size) for about 4 seconds. What is this doing?
The data file, log file, and backup file are on different drives, but all local drives. The interesting point is that the reads jump after the restore; I tested it on a different server, same result...