DB Design :: Don't Have Enough RAM Slots Or Redundant Power Supplies
Aug 18, 2015
I'm looking for clarification on how SQL Server 2014 would be licensed if a server has only 1 of 2 CPU sockets in use (the second socket being empty). I know the new license model is core-based, not socket-based. So does this mean that if I buy a "4 core pack" to cover my first CPU (a quad-core CPU), I am compliant with the license model? Or does Microsoft want me to license the empty socket with a core pack too? It's hard to find a rack-mount server that has only 1 CPU socket, and the ones I do find don't have enough RAM slots or redundant power supplies.
While a stored procedure was running DBCCs against a specific database, the following error was encountered: 'WARNING: No read-ahead slots available...'. The SQL Server service hung upon encountering this error, would not respond, and the box had to be rebooted. As a preliminary action I re-configured the 'RA worker threads' setting from 3 to 8. I need to find out whether this setting is auto-managed within SQL Server 7.0, or whether it is something we will need to watch continuously (we had another instance of this failure about 1.5 months ago). Also, what else can be done to avoid this problem in the future?
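For reference, a hedged sketch of inspecting the setting with sp_configure. 'RA worker threads' is the legacy 6.5-era option name; read-ahead was reworked to be self-tuning in 7.0, so the option may no longer be exposed on your build (the call errors out if it isn't):

    -- expose advanced options so read-ahead settings are visible
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE WITH OVERRIDE

    -- inspect the current value; raises "not a valid configuration option"
    -- on builds where read-ahead is fully auto-managed
    EXEC sp_configure 'RA worker threads'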
I am trying to show images in a product listing in Power View. I work with the Excel 2013 desktop version based on an Office 365 Pro account. I did the following steps:
- import of an Excel file with an article list via Power Query, loading the data to the data model
- import of JPG images from a folder via Power Query, setting Content as binary type and loading the data to the data model
- linking both tables in Power Pivot --> Manage via the image name
- setting the table behavior for the images table under Power Pivot --> Manage --> Advanced (e.g. Default Image: Content)
- opening Power View and building article cards with article number and image

Problem: only a camera icon shows up in Power View.
Is there a solution with a desktop version? Can I use my Office 365 Pro account to make it work? How? Why is there no solution for showing images in a pivot table? Link to Dropbox with Power Pivot files.
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
-- =============================================
-- =============================================
ALTER PROCEDURE [dbo].[Product_FindByParameters]
(
    @Name varchar(255),
    @ManufactureID bigint,
    @ShortDescription varchar(255),
    @ManufactureProductID varchar(255),
    @ItemsInStock bigint,
    @StorePartNumber varchar(255)
)
AS
BEGIN
    SELECT P.ProductId, P.StorePartNumber, P.ShortDescription, P.ManufactureProductID,
           P.Name, P.Price, P.ItemsInStock, M.ManufactureName
    FROM Product P
    LEFT JOIN Manufacture M ON P.ManufactureID = M.ManufactureID
    WHERE (P.Name LIKE '%' + @Name + '%' OR @Name IS NULL)
      AND (P.ShortDescription LIKE '%' + @ShortDescription + '%' OR @ShortDescription IS NULL)
      AND (P.ManufactureProductID LIKE '%' + @ManufactureProductID + '%' OR @ManufactureProductID IS NULL)
      AND (P.ItemsInStock = @ItemsInStock)
      AND (P.ManufactureID = @ManufactureID OR @ManufactureID IS NULL)
END

--exec [dbo].[Product_FindByParameters] 'Heavy-Duty ',7,'Compact Size','DC727KA',0,''
--exec [dbo].[Product_FindByParameters] 'Heavy',7,'','','',''
--exec [dbo].[Product_FindByParameters] 'Heavy','','','','',''

The first two exec statements return many data rows, but why does the last one return no rows? How can I rewrite the stored procedure so that it produces output even if I don't supply @ManufactureID as input? Kindly help.
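A hedged sketch of one likely explanation and fix: @ManufactureID is a bigint, so passing '' implicitly converts it to 0, and the OR @ManufactureID IS NULL branch never fires (presumably no manufacturer has ID 0); the same conversion applies to @ItemsInStock. Treating 0 as "no filter" with NULLIF keeps the existing call sites working:

    -- inside the WHERE clause, replace the last two predicates with:
    AND (P.ItemsInStock = @ItemsInStock OR NULLIF(@ItemsInStock, 0) IS NULL)
    AND (P.ManufactureID = @ManufactureID OR NULLIF(@ManufactureID, 0) IS NULL)

The trade-off is that you can no longer filter for an actual value of 0; if that matters, pass NULL rather than '' for parameters you want to skip instead.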
With MERGE/INSERT statements, is DISTINCT the best way to handle the problem of a source table containing duplicate rows, along with a WHERE NOT IN clause? The source dataset is large, and having to do DISTINCT plus further filtering is taxing on the ETL.
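One common alternative, sketched here with hypothetical table and column names: dedupe the source inside the MERGE with ROW_NUMBER() (SQL Server 2005+; MERGE itself needs 2008+), which replaces both the DISTINCT pass and the WHERE NOT IN filter:

    MERGE dbo.Target AS t
    USING (
        SELECT BusinessKey, Attr1, Attr2
        FROM (
            SELECT BusinessKey, Attr1, Attr2,
                   ROW_NUMBER() OVER (PARTITION BY BusinessKey
                                      ORDER BY LoadDate DESC) AS rn
            FROM dbo.SourceStage
        ) d
        WHERE rn = 1                      -- one row per key survives
    ) AS s
    ON t.BusinessKey = s.BusinessKey
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (BusinessKey, Attr1, Attr2)
        VALUES (s.BusinessKey, s.Attr1, s.Attr2);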
I want to configure a redundant SQL Server. Let's say that if server A goes down, server B can take over the workload of server A, and this is transparent to users, meaning they won't notice that server A is down.
Besides the failover clustering method, is there any other solution?
My requirement needs to run on Microsoft SQL Server 2000 Standard Edition and Microsoft Windows 2000 Standard Edition.
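One approach often suggested for Standard Edition (where the built-in log shipping of Enterprise Edition isn't available) is hand-rolled log shipping. Failover is manual, so this is a warm standby rather than the fully transparent takeover described above. A minimal sketch, with hypothetical database and path names:

    -- on server A (primary), run on a schedule:
    BACKUP LOG AppDb TO DISK = '\\ServerB\Standby\AppDb_log.trn'

    -- on server B (standby), restore each log backup, keeping the db readable:
    RESTORE LOG AppDb FROM DISK = '\\ServerB\Standby\AppDb_log.trn'
        WITH STANDBY = 'C:\Standby\AppDb_undo.dat'

    -- at failover time, recover the standby and repoint the clients:
    RESTORE DATABASE AppDb WITH RECOVERY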
The following script finds redundant indexes in a given database. I run it in Query Analyzer, one statement at a time.
PLEASE: review the output before dropping any index.
USE ....
-- step 1: build a (tab, idx, col, order) view
create view listaidxcols as
select SO.name as tabname,
       SI.name as idxname,
       IK.keyno as keyno,
       SC.name as colname
from sysindexkeys IK, syscolumns SC, sysindexes SI, sysobjects SO
where -- link syscolumns
      IK.id = SC.id and IK.colid = SC.colid
      -- link sysindexes
      and IK.id = SI.id and IK.indid = SI.indid
      -- link sysobjects (user tables only)
      and IK.id = SO.id and SO.xtype = 'U'
      -- no internal indexes
      and SI.name not like '_WA_Sys_%'
      and SI.name not like 'hind_%'

-- step 2: view to get the number of columns per index
create view cantcolsidx as
select tabname, idxname, count(*) as numllaves
from listaidxcols
group by tabname, idxname

-- step 3: the redundant index list (index A is a leading prefix of index B)
select A.tabname as tabla, A.idxname as Aidx, B.idxname as Bidx
from cantcolsidx A, cantcolsidx B
where A.tabname = B.tabname
  and A.numllaves < B.numllaves
  and A.idxname <> B.idxname
  and A.numllaves in (
      select count(*)
      from listaidxcols C, listaidxcols D
      where C.tabname = A.tabname and C.idxname = A.idxname
        and D.tabname = B.tabname and D.idxname = B.idxname
        and C.idxname <> D.idxname
        and C.colname = D.colname
        and C.keyno = D.keyno
  )

-- clean up
drop view listaidxcols;
drop view cantcolsidx;
edit: this came out longer than I thought; any comments about anything here are greatly appreciated. Thank you for reading.

My system stores millions of records, each with fields like first name, last name, email address, city, state, zip, along with any number of user-defined fields. The application allows users to define message templates with variables. They can then select a template, and for each variable in the template, type in a value or select a field. The system allows you to query for messages you've sent by specifying criteria for the variables (not the fields). This requirement has made it difficult to normalize my data model at all for speed. What I have is this:

[fieldindex]
id int PK
name nvarchar
type datatype

[recordindex]
id int PK
....

[recordvalues]
recordid int PK
fieldid int PK
value nvarchar

Whenever messages are sent, I store which fields were mapped to which variables for that deployment. So the query with a variable criterion looks like this:

select coalesce(vm.value, rv.value)
from sentmessages sm
inner join variablemapping vm on vm.deploymentid = sm.deploymentid
left outer join recordvalues rv on rv.recordid = sm.recordid and rv.fieldid = vm.fieldid
where coalesce(vm.value, rv.value) ....

This model works pretty well for searching messages with variable criteria and looking up variable values for a particular message. The big problem I have is that the recordvalues table is HUGE: 1 million records with 50 fields each = 50 million recordvalues rows. The value column, two int columns, plus the two indexes I have on the table make it into a beast. Importing data takes forever. Querying the records (with a field criterion) also takes longer than it should. That makes sense; the performance was largely IO bound.

I decided to try and cut into that IO. Looking at a recordvalues table with over 100 million rows in it, there were only about 3 million unique values, so I split the recordvalues table into two tables:

[recordvalues]
recordid int PK
fieldid int PK
valueid int

[valueindex]
id int PK
value nvarchar (unique)

Now valueindex holds 3 million unique values and recordvalues references them by id. To my surprise this shaved only 500 MB off a 4 GB database! Importing didn't get any faster either; although it's no longer IO bound, it appears the CPU as the new bottleneck outweighed the IO bottleneck. This is probably because I haven't optimized the queries for the new tables (I was hoping it wouldn't be so hard without the IO problem).

Is there a better way to accomplish what I'm trying to do (eliminate the redundant data)? Does SQL have built-in constructs to do stuff like this? It seems like maybe I'm trying to duplicate functionality at a high level that may already exist at a lower level. IO is becoming a serious bottleneck. The million-record, 50-field csv file is only 500 MB. I would've thought that after eliminating all the redundant first names, cities, last names, etc. it would be less data, not 8x more!

-Gordon
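A hedged sketch of the dictionary-encoding load path described above, using the post's table names plus a hypothetical staging table. Loading the dictionary and the references in two set-based passes usually beats row-at-a-time lookups during import:

    -- stage the raw (recordid, fieldid, value) triples first, then:

    -- 1) add any values not yet in the dictionary
    INSERT INTO valueindex (value)
    SELECT DISTINCT s.value
    FROM recordvalues_staging s
    WHERE NOT EXISTS (SELECT 1 FROM valueindex v WHERE v.value = s.value);

    -- 2) insert the references by id
    INSERT INTO recordvalues (recordid, fieldid, valueid)
    SELECT s.recordid, s.fieldid, v.id
    FROM recordvalues_staging s
    JOIN valueindex v ON v.value = s.value;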
I have a situation where I need to identify redundant rows within a table. Here is the schema of the table:
create table Temp.Response (
    TempKey     int identity(1,1) not null primary key clustered,
    ResponseId  char(27) not null,
    StudentUin  char(9)  not null,
    TemplateId  char(27) not null,
    MidEndFlag  char(3)  not null
)
Here is a sample dataset that represents the production data:
I need to identify the ResponseId values for rows that contain redundant StudentUin/TemplateId/MidEndFlag combinations, so that I can delete those rows. ResponseId, while not the primary key, is unique in this dataset. I thought I might use a cursor to parse this, but the real dataset is exceedingly large, so I would like a set-based solution.
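A set-based sketch, assuming "redundant" means every row after the first occurrence of each StudentUin/TemplateId/MidEndFlag combination (taking the lowest TempKey as the keeper):

    SELECT r.ResponseId
    FROM Temp.Response r
    WHERE r.TempKey NOT IN (
        SELECT MIN(TempKey)
        FROM Temp.Response
        GROUP BY StudentUin, TemplateId, MidEndFlag
    );

Once the output has been verified, the SELECT can be swapped for a DELETE with the same WHERE clause.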
I need info from 2 tables; from Table 2 I just need 1 column. When I ask for this column, the output I get is data repeating itself many times.
DISTINCT should give me unique data, but it doesn't.... The code:
SELECT DISTINCT FSenddate, FSupplyIDName, FSupplyerNumber, FBillNo, FSourceBillNo,
       FItemName, FItemModel, FAuxQty, FAuxTaxPrice, FHeadSelfP0237
FROM vwICBill_26
WHERE FSenddate BETWEEN DATEADD(dd, -14, GETDATE()) AND GETDATE()
This code works with Table 1 (vwICBill_26), but with Table 2 (vwICBill_1):
SELECT DISTINCT vwICBill_26.FSenddate, vwICBill_26.FSupplyIDName,
       vwICBill_26.FSupplyerNumber, vwICBill_26.FBillNo,
       vwICBill_26.FSourceBillNo, vwICBill_26.FItemName,
       vwICBill_26.FItemModel, vwICBill_26.FAuxQty,
       vwICBill_26.FAuxTaxPrice, vwICBill_26.FHeadSelfP0237,
       vwICBill_1.FDate, vwICBill_1.FContractBillNo
FROM vwICBill_26, vwICBill_1
WHERE vwICBill_26.FSenddate BETWEEN DATEADD(dd, -14, GETDATE()) AND GETDATE()
  AND vwICBill_1.FContractBillNo = vwICBill_26.FSourceBillNo
The last condition is the problem: I want it to show me the data that is not equal. As soon as I implement the not-equal comparison, it shows me massively repeated data; in fact, even without the last condition I get this output.

Altogether, I want clean database output without repeating data. Any ideas how it might work without DISTINCT?

I think this is a typical amateur problem, but I would appreciate any help!
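A hedged guess at the underlying issue, with a sketch: a not-equal condition in a comma-style join pairs every vwICBill_26 row with every non-matching vwICBill_1 row, which is what produces the mass repetition, and DISTINCT can't collapse rows that still differ in the vwICBill_1 columns. If the goal is "rows in vwICBill_26 with no match in vwICBill_1", NOT EXISTS expresses that without DISTINCT:

    SELECT b26.FSenddate, b26.FSupplyIDName, b26.FSupplyerNumber, b26.FBillNo,
           b26.FSourceBillNo, b26.FItemName, b26.FItemModel, b26.FAuxQty,
           b26.FAuxTaxPrice, b26.FHeadSelfP0237
    FROM vwICBill_26 b26
    WHERE b26.FSenddate BETWEEN DATEADD(dd, -14, GETDATE()) AND GETDATE()
      AND NOT EXISTS (SELECT 1
                      FROM vwICBill_1 b1
                      WHERE b1.FContractBillNo = b26.FSourceBillNo);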
I have several reports in a Power View Gallery. In Gallery view, most of the reports show the "Open New Excel Workbook", the "Create Power View Report", and the "Manage Data Refresh" buttons on the right side of the report list. Why would some reports not have these buttons available? In the attached image you can see one report with the buttons and one without the buttons.
I've imported a number of Excel sheets into a Power Query table. All seems to appear OK until I load the data. Of the 15k rows, around 2k have a similar error where it cannot convert an integer to type string, as in the example below:
Expression.Error: We cannot convert the value 40 to type Text. Details: Value=40 Type=Type
The columns in question are all of integer type, and I've looked through the M query: there is no conversion to string taking place. The values where we don't get the error are also integers, hence the intriguing question of why the error occurs on a subset and not the others. I suspect there is also a limit on the number of errors reported. Somewhere, internally, the M query is converting the column to text for some reason.
I have a slow loading issue with an if statement. In the raw data, the field [Location] is a text field, e.g. 0010. I have a parameterised query that gets a Location_Value from Excel and passes it to the PQ query using:
#"Filtered Rows1" = Table.SelectRows(#"Removed Other Columns", each ([SalesMode] = 0) and ([SalesType] = 0) and ([Location] = Location_Value))
This works fine if you choose a single location. However, I wanted to be able to select all locations, and text is horrible to work with, so in PQ I used the change type function to change the location column into whole numbers. I changed Excel to also pass a number as Location_Value. I was therefore surprised when the same query took 2.5 times longer to refresh.
My PQ now looks like this:

#"Changed Type" = Table.TransformColumnTypes(#"Removed Other Columns", {{"Location", Int64.Type}}),
#"Filtered Rows1" = Table.SelectRows(#"Changed Type", each ([SalesMode] = 0) and ([SalesType] = 0) and ([Location] = Location_Value))
I'm wondering if I need to do something to the ([Location] = Location_Value) part, as maybe it still thinks [Location] is text and is trying to compare it to a number. I first assumed the step above meant that [Location] was now a number, but maybe you still have to wrap it with some kind of VALUES or TEXT function?
Country   State        Rank
India     Kerala       1
India     Kerala       2
India     Kerala       3
India     Tamil Nadu   1
India     Tamil Nadu   2
India     Orissa       1
India     Orissa       2
US        Florida      1
US        Florida      2
US        NewYork      1
I have to generate a rank like this in Power Pivot. How can I achieve it?
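For reference, a sketch of the same per-state numbering expressed in T-SQL; the table and ordering column are stand-ins, since the post doesn't say what drives the rank (in Power Pivot itself this would typically be a DAX calculated column):

    SELECT Country, State,
           ROW_NUMBER() OVER (PARTITION BY Country, State
                              ORDER BY SomeKeyColumn) AS [Rank]
    FROM dbo.StateRows;   -- hypothetical table and ordering column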
I have opened an account at [URL] and taken the 60-day trial for Power BI Pro. I've developed Power Pivot models and generated Power Views in SharePoint 2013, but I'm new to Power BI Desktop. I have created a report in Power BI Desktop and published it to [URL]. Also, I have uploaded an Excel file directly to [URL], created the report from the workspace available there itself, and pinned the report to a dashboard. Everything is fine up to this point. But I need to refresh the file which I have uploaded. I have some dummy data in the Excel sheet.
ZipCode   State   ZipName
2345      AA      AA
456       BB      BB
6787      CC      CC
This has been created as a table and then added to the data model, so a Power Pivot model has been created for it. Then I uploaded this file to the [URL] site. But I'm getting an error message while trying to schedule a refresh for it:
"You cannot schedule refresh for this dataset because it does not contain data model connections. You cannot schedule refresh on worksheet connections or linked tables. To schedule refresh the data must be loaded into the data model."
How can I create a data model connection? How can I schedule refresh for an excel file like this?
I'm a relative newcomer to Power View. I've been playing with charts and have been struggling to combine both line and bar on the same chart. It would appear this functionality is not available. Considering this is basic functionality when it comes to charting, how can it be achieved?
I'm looking to replace text in a given column given a set of conditions in the other columns. Please see below the M query in the advanced editor, and in particular the bold text. Here I've created a new entry that would appear in the applied steps window in the Power Query editor, which I have called "Replace Values". The logic is: if the Data.Column4 column equals "London", then replace null values in Data.Column5 with London. However, when I save the query below I get the error:
Expression.Error: There is an unknown identifier. Did you use the [field] shorthand for a _[field] outside of an 'each' expression?
I plan to change the expression to test for multiple conditions; however, I need to get the basic expression working first. The other frustration I had with the "if" statement is that it had to have an "else" even though I didn't require one. Am I doing something wrong here?
I want to show a moving average on a Power BI dashboard - for example, I want to always show the last 30 measurements of body temperature - but it looks like the Power BI dashboard shows all the measurements I have and compresses them, which makes the dashboard ugly.
I tried to customize the X-axis properties, but I don't know what I should change the default start/stop properties to (the default property value is Automatic).
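If the measurements come from a SQL source (an assumption; the post doesn't say), one option is to shape the data before it reaches the dashboard. A sketch with hypothetical names (the windowed AVG needs SQL Server 2012+):

    -- keep only the most recent 30 readings, with a 30-row moving average
    SELECT TOP (30)
           MeasuredAt,
           Temperature,
           AVG(Temperature) OVER (ORDER BY MeasuredAt
                                  ROWS BETWEEN 29 PRECEDING AND CURRENT ROW) AS MovAvg30
    FROM dbo.BodyTemperature
    ORDER BY MeasuredAt DESC;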
We are trying to build dashboards using the Power BI Desktop version with a Power Pivot data model as a back end. To import the data to Power BI, we used the Import -> Excel workbook contents option and successfully imported the model into Power BI. When I try to refresh Power BI, it hits the back-end tables available in the Power Pivot model. But my requirement is that I need to refresh Power BI from the Power Pivot data model itself (I'm expecting an option like Tableau's "Import from Power Pivot").
One big reason why we have turned to Power BI for Office 365 is to share Power Pivot models with our senior executives who all use Macs.
While they have been able to successfully view the Power Pivot models and interact with the slicers in Power BI, they have not been able to print the reports. We thought it was a Mac issue, but it's happening on PCs too. There doesn't seem to be a way to resize the report so that all columns print. So we then tried clicking the tiny button in the bottom right corner that says "View Full-Size Workbook." The file then opens in Excel Online. When we click on the printer icon, we see the report but the slicer becomes invisible. When printing, all columns do show up; however, the slicer is invisible.
How to successfully print Power Pivot reports with slicers in Power BI?
OK, I'm doing a football database for fixtures and stuff. The problem I am having is that in a fixture there is both a home and an away team. The tables as a result are something like this:
It's not exactly like that, but you get the point. The question is: can I do a fixture query which results in one record per fixture, showing both teams' details, the first in a HomeTeam field and the second in an AwayTeam field?
Fixture contains the details about the fixture, like date and fixture id and whether it has been played.
Team contains team info like team id, name, associated graphic
TeamFixture is the table which links the fixture to its home and away teams.
TeamFixture exists to prevent a many to many type relationship.
Make sense? Sorry if this turns out to be really easy; I just can't get my head around it at the mo!
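A hedged sketch of one way to get one row per fixture, assuming TeamFixture carries a home/away flag (the column names here are guesses): join Team twice, once through the home row and once through the away row:

    SELECT f.FixtureId,
           f.FixtureDate,
           h.TeamName AS HomeTeam,
           a.TeamName AS AwayTeam
    FROM Fixture f
    JOIN TeamFixture tfh ON tfh.FixtureId = f.FixtureId AND tfh.IsHome = 1
    JOIN Team h          ON h.TeamId = tfh.TeamId
    JOIN TeamFixture tfa ON tfa.FixtureId = f.FixtureId AND tfa.IsHome = 0
    JOIN Team a          ON a.TeamId = tfa.TeamId;

If TeamFixture has no home/away flag, adding one (or storing HomeTeamId/AwayTeamId directly on Fixture, since there are always exactly two teams) makes this query possible.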
I would like to create a table called Product. My objective is to get the list of packages available for each product in a data grid view column while selecting each product. Each product may have different package types (e.g. Nos, CTN, OTR, etc.). Some products may have two packages and some three packages, etc. The quantity in each package may also differ (e.g. for some products a CTN may contain 12 nos, in other cases 8 nos, etc.). Prices for each package will also be different, and those also need to be shown. How should I design the table?
Product name:                  Nestle milk             | Rainbow milk
Packages:                      CTN, OTR, NOs           | CTN, NOs
Price:                         50, 20, 5               | 40, 6
(Remarks for your reference):  CTN = 10 nos, OTR = 4 nos | CTN = 8 nos
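A sketch of one common design for this (all names hypothetical): a Product table, a PackageType lookup, and a linking ProductPackage table that holds the per-product quantity and price for each package type, so each product can have any number of packages:

    CREATE TABLE Product (
        ProductId   int IDENTITY(1,1) PRIMARY KEY,
        ProductName varchar(100) NOT NULL
    );

    CREATE TABLE PackageType (
        PackageTypeId int IDENTITY(1,1) PRIMARY KEY,
        Code          varchar(10) NOT NULL      -- e.g. 'CTN', 'OTR', 'NOS'
    );

    CREATE TABLE ProductPackage (
        ProductId     int NOT NULL REFERENCES Product(ProductId),
        PackageTypeId int NOT NULL REFERENCES PackageType(PackageTypeId),
        UnitsPerPack  int NOT NULL,             -- CTN = 10 nos for one product, 8 for another
        Price         decimal(10,2) NOT NULL,
        PRIMARY KEY (ProductId, PackageTypeId)
    );

The grid view then just selects from ProductPackage joined to PackageType, filtered by the chosen ProductId.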
Hello, I'm trying to follow a specification here:

W1 = (W0^0.333 + G * D/1000)^3

(the ^ symbol means 'to the power'). In my stored procedure I have used the following test data, with unexpected results. Can anyone tell me if I'm using POWER properly?

@Value = POWER(((POWER(150,(1/3)) + 2) * 10/1000),3)

Thanks
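A hedged reading of what's going wrong, assuming the test data is W0 = 150, G = 2, D = 10: first, 1/3 is integer division in T-SQL and evaluates to 0, so POWER(150, (1/3)) returns 1; second, the parentheses multiply the whole sum by 10/1000 instead of adding G*D/1000 to the cube root. A sketch of the corrected expression:

    DECLARE @W0 float, @G float, @D float, @W1 float
    SELECT @W0 = 150, @G = 2, @D = 10
    SET @W1 = POWER(POWER(@W0, 1.0/3) + @G * @D / 1000.0, 3)
    SELECT @W1    -- (150^(1/3) + 0.02)^3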
I have a SQL Server that's partly managed by another team for their vendored application. Anyway, they're having some weird performance issues. Last weekend a guy tripped over the power cord (duh!). I looked at the logs and there were no recovery errors on restart, and it seems to be running fine.
I'm looking for help on any diagnostics I can run to check anything else out.
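A reasonable starting point after an unplanned power loss (the database name below is a placeholder): verify physical integrity and scan the error log for I/O warnings:

    -- check each database the vendor application uses
    DBCC CHECKDB ('VendorAppDb') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    -- scan the current error log (look for 823/824/825 I/O errors)
    EXEC sp_readerrorlog;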
Is it possible to convert PowerPoint files to SQL Server version 7 or 2000? My PowerPoint files have no images or pictures, only lyrics for songs that are used in the church during Sunday services.
I was appointed to build a system in VB6, but my first job is to convert the PowerPoint files to SQL Server. There are 2000 songs in PowerPoint right now.
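A minimal sketch of a target schema to load the lyrics into (hypothetical names; the VB6 app would read each presentation and insert one row per song, or one per slide if verses should stay separate):

    CREATE TABLE Song (
        SongId  int IDENTITY(1,1) PRIMARY KEY,
        Title   varchar(200) NOT NULL,
        Lyrics  text NOT NULL    -- text rather than varchar, since the target is SQL 7/2000
    )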
I need to write some SQL to do a power regression for a trendline. I have 2 columns of data which represent my X, Y data, and all I'm after is the a and the b for the function y = ax^b. Has anyone run into this before? I know SSAS has a linear regression function, but my data really only fits the power model.
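One standard approach, sketched with a hypothetical table dbo.Points(X, Y): linearize y = ax^b as ln y = ln a + b ln x and solve ordinary least squares on the logs (requires X > 0 and Y > 0; LOG() in T-SQL is the natural log):

    SELECT b = slope,
           a = EXP(mean_lny - slope * mean_lnx)
    FROM (
        SELECT slope    = (n * sxy - sx * sy) / (n * sxx - sx * sx),
               mean_lnx = sx / n,
               mean_lny = sy / n
        FROM (
            SELECT n   = CAST(COUNT(*) AS float),
                   sx  = SUM(LOG(X)),  sy = SUM(LOG(Y)),
                   sxy = SUM(LOG(X) * LOG(Y)),
                   sxx = SUM(LOG(X) * LOG(X))
            FROM dbo.Points
            WHERE X > 0 AND Y > 0
        ) sums
    ) fit;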
Hello all, I am having a lot of trouble maintaining decimal precision in the script below. Specifically, the statement: SELECT @SKIPPROB = 1-(POWER(.999, @CURRENT - @LASTHIT)). This should return a large decimal value, but some of the values being updated into the Straights table keep showing up incorrectly rounded, sometimes as whole numbers. In one particular example, the statement above becomes 1-(POWER(.999, 10201)). Note: 10201 is the result of @CURRENT - @LASTHIT. This should equate to 0.99996306. For some reason, when I open the Straights table, the result shown is 1.00000000. Please note that the result is loaded into the variable @SKIPPROB, which is declared as decimal(10,8); the column in the Straights table that this value is updated into is also decimal(10,8). I am still very new to this, so am I overlooking something? I don't understand why the result is being rounded up to 1.00000000. Any help is appreciated.

DECLARE @EXACT INTEGER, @CURRENT INTEGER, @LASTHIT DECIMAL(10,5),
        @AVGSKIP DECIMAL(10,2), @MAXSKIP INTEGER, @1000AGO DECIMAL(10,4),
        @HITS1000 DECIMAL(10,4), @HITSLIFE DECIMAL(10,4), @SKIPPROB DECIMAL(10,8)

SET @EXACT = 1000
WHILE @EXACT > 0
BEGIN
    SET @EXACT = @EXACT - 1
    SELECT @CURRENT  = (SELECT MAX(GAME) FROM History)
    SELECT @LASTHIT  = (SELECT MAX(GAME) FROM History WHERE EXACT = @EXACT)
    SELECT @AVGSKIP  = (SELECT AVG(CAST(exactskip AS decimal(10,2))) FROM History WHERE EXACT = @EXACT)
    SELECT @MAXSKIP  = (SELECT MAX(EXACTSKIP) FROM History WHERE EXACT = @EXACT)
    SELECT @1000AGO  = (SELECT MAX(GAME) FROM History) - 1000
    SELECT @HITS1000 = (SELECT COUNT(EXACT) FROM History WHERE EXACT = @EXACT AND GAME > @1000AGO)
    SELECT @HITSLIFE = (SELECT COUNT(EXACT) FROM History WHERE EXACT = @EXACT)
    SELECT @SKIPPROB = 1 - (POWER(.999, @CURRENT - @LASTHIT))

    UPDATE Straights
    SET [CURRENT SKIP]     = @CURRENT - @LASTHIT,
        [AVG SKIP]         = @AVGSKIP,
        [MAX SKIP]         = @MAXSKIP,
        [TIMES DUE]        = (@CURRENT - @LASTHIT) / 1000,
        [FLIPS]            = (@CURRENT - @LASTHIT) / 692.8005,
        [HITS 1000]        = @HITS1000,
        [PERCENT 1000]     = (@HITS1000 / 1000) * 100,
        [TOTAL HITS]       = @HITSLIFE,
        [PERCENT LIFETIME] = (@HITSLIFE / @CURRENT) * 100,
        [SKIP PROB]        = @SKIPPROB
    WHERE EXACT = @EXACT;
END
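The likely culprit, for what it's worth: POWER() returns the same data type as its first argument, and the literal .999 is a decimal with only three decimal places, so 0.999^10201 (about 0.0000369) rounds to 0.000 and 1 - 0 = 1.00000000. A sketch of the fix is to force a float base and let the conversion to decimal(10,8) happen at assignment time:

    DECLARE @SKIPPROB decimal(10,8)
    SELECT @SKIPPROB = 1 - POWER(CAST(0.999 AS float), 10201)
    SELECT @SKIPPROB    -- approximately 0.99996306, not 1.00000000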