More expensive is not always better
It's always tempting when designing a technical architecture to think that more expensive is better, particularly when buying hard disks.
The lure of 15k RPM disks can seem appealing in terms of getting the required IOPS for your system.
What most people don't realise, though, is that it can actually be more cost effective to get higher IOPS using 10k RPM disks. Not only that, but you get a load of extra storage to boot. I discovered an article recently which discusses how you can get much better value for your money by buying 10k disks and configuring them for an additional performance boost.
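As a rough illustration, the cost-per-IOPS argument can be sketched like this. All the per-disk IOPS figures and prices below are hypothetical ballpark values for the sake of the example, not real quotes:

```python
# Illustrative cost-per-IOPS comparison between 15k and 10k RPM disks.
# All figures are hypothetical ballpark values, not real prices or specs.

def iops_per_pound(iops_per_disk, price_per_disk, disk_count):
    """Aggregate IOPS per pound spent for a simple striped array."""
    total_iops = iops_per_disk * disk_count
    total_cost = price_per_disk * disk_count
    return total_iops / total_cost

# Hypothetical figures: a 15k disk delivering ~180 IOPS at ~£250,
# versus a 10k disk delivering ~140 IOPS at ~£120.
fast = iops_per_pound(iops_per_disk=180, price_per_disk=250, disk_count=8)
slow = iops_per_pound(iops_per_disk=140, price_per_disk=120, disk_count=8)

print(f"15k RPM: {fast:.2f} IOPS per pound")
print(f"10k RPM: {slow:.2f} IOPS per pound")
```

With these illustrative numbers the 10k disks deliver noticeably more IOPS per pound, and a larger array of them also leaves you with far more raw capacity.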
On a recent system I designed, I had been experimenting with the idea of creating an active-active processing setup to handle a very large amount of data in a short time: 70 million records in 4 hours, equating to roughly 5,000 transactions per second. SQL Server Service Broker is an excellent technology when it comes to scaling to high throughput and responding to peaks and troughs. The use of asynchronous processing means that not everything has to be done immediately; certain types of database activity can be performed when the resources become available.
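The peak-rate figure above is easy to sanity-check: 70 million records spread over a 4-hour window works out at just under 5,000 transactions per second.

```python
# Sanity check of the peak throughput quoted above:
# 70 million records processed in a 4-hour window.
records = 70_000_000
window_seconds = 4 * 3600  # 4 hours in seconds

tps = records / window_seconds
print(f"{tps:.0f} transactions per second")  # roughly 4,861 tps
```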
Scaling to this kind of peak load in a short window becomes wasteful if the kit is only used for a small part of the day. As we all know, it gets disproportionately expensive to scale up in terms of hardware and hosting costs, particularly when you need high-end hardware.
Wouldn't it be better to scale out the load onto multiple smaller systems? A great idea, I thought, so I started pursuing it. But when you look at systems that require Enterprise edition for whatever reason (DR, online index rebuilds, etc.), the licence costs of SQL Server become ridiculous. While the concept of scale-out works well with open source, where there are no licence costs, in the Microsoft world it doesn't seem feasible unless you have lots of dough.
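A back-of-the-envelope sketch shows why scale-out hurts under commercial licensing. The per-core price below is hypothetical, and the per-server minimum is illustrative (SQL Server's per-core model imposes a minimum number of core licences per processor; check the current licensing terms for exact figures):

```python
# Back-of-the-envelope licence cost: scale-up vs scale-out.
# PRICE_PER_CORE is a hypothetical figure; MIN_CORES_PER_SERVER is an
# illustrative stand-in for SQL Server's per-core licensing minimum.

PRICE_PER_CORE = 5000      # hypothetical cost of one core licence
MIN_CORES_PER_SERVER = 4   # illustrative licensing minimum per server

def licence_cost(servers, cores_per_server):
    """Total licence cost, billing at least the per-server minimum."""
    billable = max(cores_per_server, MIN_CORES_PER_SERVER)
    return servers * billable * PRICE_PER_CORE

scale_up = licence_cost(servers=1, cores_per_server=16)   # one 16-core box
scale_out = licence_cost(servers=8, cores_per_server=2)   # eight 2-core boxes

print(scale_up, scale_out)
```

With the same 16 cores of total compute, the eight small servers cost double to license under these assumptions, because each one hits the per-server minimum.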
It was interesting doing a comparison recently of the benefits of colocation versus a managed service for implementing a new system with 24x7 availability. On paper the managed service seemed to be more than double the cost when factoring in only the hardware, but what you get for your money with a managed service is access to a range of specialists in every area of the technology stack (storage, network, VMware, SQL Server, security). Both approaches have their merits; here is a summary of each:
Managed service:

- Specialists covering the entire technology stack, available 24x7
- Need to conform to a more structured infrastructure management model (i.e. submission of tickets)
- Ease of provisioning new hardware and storage (shared SAN and network infrastructures that can easily be used to expand quickly)
- Cost: generally much more expensive
- Lack of flexibility in infrastructure management

Colocation:

- Massive cost savings
- Ability to invest in new staff who are tailor-made for the role
- The need to include multiple disciplines in the recruitment profile
- Difficulty in expanding and reacting to new technology needs (i.e. provisioning new servers and storage)
- No hardware replacement program as standard
I was recently called in to do a band-aid fix on a reporting solution that had become key to the business. It was written in a combination of Excel and VBA and consisted of ETL routines to load data from other spreadsheets as well as complicated calculations and pivot tables, while also acting as the reporting layer.
While Excel sometimes proves to be a great all-round tool for those who know it inside out, development can become complicated when the scope begins to creep. It is also notoriously difficult to debug. Natively it is not easy to refer back to source-to-target mappings, and in essence it is not an ETL tool. It also lacks support for hierarchies.
Now, while I was amazed at the creativity of the developer in combining all of these features into one tool, it was certainly not a maintainable solution. In this particular case the developer had moved on. I can understand why companies, in their eagerness to get a solution to market as soon as possible, cut corners and find the fastest, dirtiest way to get results, but it comes at a cost. The business tends to become dependent on these flimsy reporting solutions, and when they break they cause chaos.
SQL Server Consolidation 101: Using Windows System Resource Manager (WSRM) to limit the number of processors used by SSAS
Written by mark
During a recent consolidation project, a great many SQL Server instances were being consolidated onto one server. The BI requirements were such that the SSAS cube could be refreshed on demand by the end user, which had the potential to cause serious performance issues for the operational systems running during the day. I needed to throttle down the number of CPUs used by SSAS, and the approach I took was to use Windows System Resource Manager (WSRM) to do the grunt work.
Here are the steps I took:
2. Set up a new "process matching criteria"
4. Select the registered service
5. Set up a new resource allocation policy
6. Add the process matching criteria set up earlier
The rule could be set up against a percentage of CPU used across the whole box; however, because we are partitioning by instance, it's best to tie the usage to specific CPUs. This is set on the Advanced tab.
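Under the hood, restricting a process to particular processors comes down to a CPU affinity mask, with one bit per logical processor. A quick sketch of how such a mask is built (the CPU numbers here are just an example, not taken from the project):

```python
# Build a CPU affinity bitmask: bit n set means the process may run on CPU n.
def affinity_mask(cpus):
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

# Example: restrict the SSAS instance to CPUs 0-3, leaving the remaining
# processors free for the operational SQL Server instances.
mask = affinity_mask([0, 1, 2, 3])
print(bin(mask))  # 0b1111
```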
And voilà: you now have the SSAS instance bound to only certain CPUs.