One of the most important aspects of adopting a new usage model for information technologies – such as cloud computing – is how users and organizations apply that model in practice. There are those who insist on moving all of a company's computing infrastructure to the cloud. At SolidQ, however, our advice is two-pronged: apply common sense, and look for the business advantage. And we understand the concept of business advantage in its widest sense – human, financial, competitive, quantitative and qualitative.

We therefore consider it essential first to gain an in-depth knowledge and understanding of the technological proposal, and then to analyze the relevant processes and services, selecting those on which that proposal can have the biggest impact, so that both the technological objective and the business advantage are achieved. We apply this same approach both to choosing cloud computing for SolidQ – as a technological alternative to the other platform and service components that we currently develop, subcontract or purchase – and to advising and supporting our customers when they adopt a cloud solution or service.

Figure-1 Figure-2

This article describes how we have applied these principles to the creation of an SQL Server monitoring service using the Windows Azure platform both for our own use and as a value proposal for our customers.

Context: the need

Monitoring is one of the core tasks of database administrators: checking that the whole SQL Server-based data infrastructure is working correctly in the hardware and software environments in which it has been deployed. In this respect, technology experts take responsibility for cataloguing the different database servers and for establishing control mechanisms which allow them to observe changes in the machines' workloads, the performance of computing operations, I/O effectiveness and efficiency, and the general health of the data objects that support the company's whole infrastructure: tables, views, stored procedures, constraints, etc.

The process usually consists of using tools – whether free or commercial – for this task, or of building approximations with automation and reporting mechanisms such as PowerShell, the SQL Server Agent, task scheduling, or services which deliver reports both in digital form and for printing. Depending on the strategy adopted, the process involves installing and configuring software, knowing and using commands, and creating the reports themselves. These reports, which may be simple or more elaborate, extract the information that is essential for detecting problems and for planning any infrastructure improvements required to meet a real business need for data processing.

In this context, querying the data or monitoring it through a dashboard is usually limited, in the best case, to accessing a series of web pages or online reports from one of the company's own computers or, if access is from outside the company, to connecting remotely or over a VPN – which restricts the scenarios and access points from which these queries can be made.

This is the initial scenario in which, in view of the services offered by Windows Azure (Microsoft's cloud platform), SolidQ saw the opportunity to quickly implement a new monitoring service that would bring together the knowledge our mentors have acquired in several fields (SQL Server databases, .NET programming, collaboration and Business Intelligence) and unite this set of technologies in the form of a service. The service would allow one or more instances of the different SQL Server servers within the company's scope to be subscribed, and a series of monitoring views to be displayed on different platforms (PC, mobile device, Windows 8 tablet, etc.). These views would be built on a set of cloud Business Intelligence services (SQL Azure databases, Azure Data Sync and SQL Azure Reporting). Database administrators could quickly access and consume them whether they were using a web browser, a Windows Phone mobile or Windows 8, and their user experience would be improved by integration with desktop social media applications.

SQL2Cloud: the hybrid service (on-premises/cloud)

SQL2Cloud is a service which monitors various aspects of SQL Server operations. Using the Data Collector API, it collects information from different sources about SQL Server, the operating system and other characteristics, such as CPU usage, memory usage and I/O operations. Several sets of SQL Server performance counters are also collected, so that DBAs can detect problems and make a more detailed analysis of the possible causes of anomalous behavior – which does not always mean that there is a real problem.
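The idea that anomalous behavior is a prompt for analysis rather than proof of a problem can be sketched as a simple deviation check over collected counter samples. The names and the two-sigma rule below are illustrative assumptions; the article does not publish SQL2Cloud's detection logic.

```python
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class CounterSample:
    instance: str
    counter: str       # e.g. "CPU %", "Page life expectancy"
    value: float

def flag_anomalies(history, latest, sigmas=2.0):
    """Mark the latest sample as anomalous when it deviates more than
    `sigmas` standard deviations from the historical mean. An anomaly
    is a prompt for further analysis, not necessarily a real problem."""
    values = [s.value for s in history]
    mu, sd = mean(values), pstdev(values)
    if sd == 0:
        return latest.value != mu
    return abs(latest.value - mu) > sigmas * sd

history = [CounterSample("SRV1", "CPU %", v) for v in (20, 22, 19, 21, 20)]
print(flag_anomalies(history, CounterSample("SRV1", "CPU %", 75)))  # True
```

A spike well outside the historical band is flagged; a value inside it (say 21) is not, so the DBA investigates only what stands out.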

Figure-3

The process of installing and using SQL2Cloud starts with the automated creation of a data warehouse in which the results of these performance-counter captures will be stored over time. The Data Collector can be activated on different machines and instances, and the information can be directed to a single point of control: the performance data warehouse (SQL2CUDW). The difference between this and a full capture of all the data collected by default by SQL Server's Data Collector lies in our mentors' know-how, since they select the KPIs that help to uniquely identify a potential problem or establish a monitoring point. Installing the service at the customer's site begins with creating the SQL2CUDW database and then subscribing as many SQL Server servers and instances as the customer needs – the ones they wish to keep under more direct control because of their importance to the business.

With the principal data repository ready, the next step is to subscribe database instances to the service. This is done by designating which servers and instances are to be added and then running an automated process that generates sequences of PowerShell commands. These commands initialize the data collection interface for each instance's performance data, and create the integration packages and scheduled SQL Server jobs needed to collect this information at the predetermined (and modifiable) time intervals, as well as setting how much information is kept and for how long (by default, data is retained for 60 or 90 days). The subscription process ends by initializing the collection processes and setting everything in operation, so that within a few minutes the SQL2CUDW starts to receive the first data from each of the subscribed servers.
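The retention rule mentioned above (data kept for 60 or 90 days by default) amounts to pruning capture rows older than a cutoff each time the scheduled job runs. A minimal sketch, assuming a simple row layout – the real service does this inside SQL Server jobs, and only the 60/90-day figures come from the article:

```python
from datetime import datetime, timedelta

RETENTION_DAYS = 90  # the article's default window is 60 or 90 days

def rows_to_keep(rows, now, retention_days=RETENTION_DAYS):
    """Drop capture rows that fall outside the retention window."""
    cutoff = now - timedelta(days=retention_days)
    return [r for r in rows if r["captured_at"] >= cutoff]

now = datetime(2012, 6, 1)
rows = [
    {"counter": "CPU %", "captured_at": datetime(2012, 5, 20)},
    {"counter": "CPU %", "captured_at": datetime(2012, 1, 15)},
]
print(len(rows_to_keep(rows, now)))  # 1 — the January capture is outside 90 days
```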

Figure-4

Once this process has completed, and provided the central server hosting the SQL2CUDW has a direct Internet connection through port 1433 (SQL Azure), everything is ready to deploy an SQL Server Integration Services (SSIS) package which is generated as part of the installation process. This package is responsible for uploading the data accumulated between the scheduled executions from the SQL2CUDW database up to the SQL2CCDW database on SQL Azure, which is created during installation. SSIS sends the differential data each time the package is run (scheduled as an SQL Server job). The cloud reporting service (SQL Azure Reporting) accesses the SQL2CCDW database to display the performance reports from a central dashboard, with summarized information about the subscribed instances and a navigation system which allows the user to drill down into the information about each performance counter.
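Sending only the differential data on each run means keeping a high-water mark and uploading just the rows captured since the previous execution. The sketch below illustrates that idea with invented names (`snapshot_id` and the function itself are assumptions, not the service's actual schema):

```python
def differential_batch(rows, last_uploaded_id):
    """Return only the rows captured since the previous run, plus the new
    high-water mark to persist for the next scheduled execution."""
    batch = [r for r in rows if r["snapshot_id"] > last_uploaded_id]
    new_mark = max((r["snapshot_id"] for r in batch), default=last_uploaded_id)
    return batch, new_mark

rows = [{"snapshot_id": i, "cpu": 20 + i} for i in range(1, 6)]
batch, mark = differential_batch(rows, last_uploaded_id=3)
print(len(batch), mark)  # 2 5
```

On the next run the stored mark (5) yields an empty batch until new captures arrive, so each execution uploads every row exactly once.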

For those cases in which, for security reasons, the central database has no external connectivity, it is possible to create a second data warehouse (on SQL Server or SQL Server Express) which, for the purposes of the service, we call SQL2CGTW (SQL2Cloud Gateway). If this second database is used, a second SSIS package is created during the subscription process, which allows either BCP or SSIS to be used to send the data from the first warehouse (SQL2CUDW) to the secondary repository (SQL2CGTW). From here onwards, uploading data from the SQL2CGTW database to the SQL2CCDW cloud database follows the same steps as those described for the principal performance database.

Synchronizing data with Azure Data Sync

In version 1.2 of SQL2Cloud an additional component has been added which simplifies loading performance data from the on-premises databases into the SQL Azure database. Using the Azure Data Sync service, the process consists of installing the Azure synchronization agent on the machine that has connectivity both to the SQL2CGTW database and to the Internet through port 1433 (SQL Azure), configuring the agent (indicating the database and a user with connection permissions), and securing the communication between the agent (on-premises database) and the cloud synchronization service (Azure Data Sync, deployed to support the service). Communication is secured with a security token generated from the cloud administration, which must be included in the agent's configuration in order to authorize bidirectional or unidirectional data transfers. This synchronization service has several advantages.

Figure-5

Compared with BCP and SSIS, the synchronization service keeps an exact record of the communication and synchronization process. If one of the tables could not be synchronized, this record allows all the data to be synchronized automatically from the last update point once connectivity is restored or the problem is resolved. Because responsibility for synchronization and for communication with the cloud service is delegated, the user in the company has less direct maintenance to carry out than with BCP or SSIS.
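Resuming from the last update point can be pictured as advancing a persisted anchor only after each change is acknowledged, so a failed run re-sends nothing that already arrived. This is a sketch of the behaviour described above; the mechanics and names are invented for illustration and are not Data Sync internals.

```python
def sync_table(rows, store, send):
    """Push rows newer than the stored anchor, advancing the anchor only
    after each row is acknowledged, so a failed run resumes exactly
    where it stopped."""
    for row in sorted(rows, key=lambda r: r["version"]):
        if row["version"] <= store["anchor"]:
            continue               # already synchronized in an earlier run
        send(row)                  # may raise if connectivity is lost
        store["anchor"] = row["version"]

# Simulate a run that fails mid-way and is then retried.
sent, fail_once = [], {"armed": True}
def send(row):
    if fail_once["armed"] and row["version"] == 3:
        fail_once["armed"] = False
        raise ConnectionError("link dropped")
    sent.append(row["version"])

rows = [{"version": v} for v in (1, 2, 3, 4)]
store = {"anchor": 0}
try:
    sync_table(rows, store, send)
except ConnectionError:
    pass                           # first run stops after versions 1 and 2
sync_table(rows, store, send)      # the retry resumes from the anchor
print(sent)                        # [1, 2, 3, 4] — nothing re-sent
```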

Since the data transfer is planned and executed in the cloud, the process will not be interrupted if the SQL Server Agent or the SSIS engine stops at a given moment and breaks the communication. We can thus exploit one of the great strengths of cloud computing, high availability; in general, service levels are higher in a public cloud environment than in an on-premises environment.

The Azure Data Sync service allows certain filters to be established, both on the columns to be synchronized and as conditions on data rows, which determine which data in a table are synchronized to the cloud. In the case of this service, this characteristic does not add much, since the participating databases are configured by default so that they contain all the relevant information for analysis and reporting.
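Column and row filtering of this kind amounts to projecting a subset of columns and keeping only rows that match a condition. A minimal sketch of the concept – these are illustrative names, not Data Sync API calls:

```python
def apply_sync_filter(rows, columns, predicate):
    """Project only the selected columns and keep only rows matching the
    predicate, analogous to Data Sync's column and row filters."""
    return [{c: r[c] for c in columns} for r in rows if predicate(r)]

rows = [
    {"instance": "SRV1", "counter": "CPU %", "value": 85, "raw_blob": b"..."},
    {"instance": "SRV2", "counter": "CPU %", "value": 12, "raw_blob": b"..."},
]
# Sync only the reporting columns, and only rows above a value of interest.
filtered = apply_sync_filter(rows, ("instance", "value"), lambda r: r["value"] > 50)
print(filtered)  # [{'instance': 'SRV1', 'value': 85}]
```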

The final advantage of the Data Sync synchronization process is the existence of separate execution logs for each of the databases involved in the synchronization (both at the customer's premises and in the cloud). In SQL Server, the execution of SSIS packages can be monitored, but if an SQL Azure operation fails, the only information recorded in the SQL Server execution logs is a failure of Azure to connect. Data Sync stores more information about the result of the operation: whether the synchronization agent produced an error, whether there was an error in synchronizing the data schema, or whether another synchronization operation was still running and lasted so long that synchronization operations overlapped (the synchronization service allows intervals of five minutes or more to be established between the databases participating in the topology).

SQL2Cloud User Experience: client devices

Figure-6

To improve the user experience for the database administrators using the service, the intention has been to extend the range of clients that can consume and display the monitoring information beyond the PC and web applications principally used for working with this type of system. To this end, a mobile client built for the Windows Phone 7 Mango update (SDK 7.5) has been introduced. Since it is not currently possible to display SQL Azure Reporting (SAR) reports on a mobile device, the user experience is not exactly the same; it has been implemented by building screens, fed over a web service, which are similar in style to the SAR application. There is a main control panel or dashboard with a Pivot control listing the different instances and the same traffic-light system summarizing each instance's performance status. From this screen, linear navigation by principal indicator/counter is enabled, giving details with pie, line and bar charts which allow the administrator to see at a glance the overall status of the same general indicators as in the web experience, although the level of detail and interaction currently achieved with web applications and the SAR client is not reached.
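A traffic-light summary of this kind typically rolls each instance up to the colour of its worst counter, so trouble is visible at a glance. The sketch below shows that roll-up; the thresholds and names are made up for illustration, not taken from the service.

```python
def traffic_light(value, amber, red):
    """Map a counter value onto the dashboard's traffic-light scale."""
    if value >= red:
        return "red"
    return "amber" if value >= amber else "green"

def instance_status(counters, thresholds):
    """An instance shows its worst counter's colour."""
    order = {"green": 0, "amber": 1, "red": 2}
    lights = [traffic_light(v, *thresholds[name]) for name, v in counters.items()]
    return max(lights, key=order.get)

thresholds = {"CPU %": (70, 90), "Disk queue": (2, 5)}   # (amber, red) limits
print(instance_status({"CPU %": 75, "Disk queue": 1}, thresholds))  # amber
```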

Figure-7


As part of our organization's R&D&I projects, a research team specializing in interaction and development is currently analyzing the possible user experience and implementation of a Windows 8 client. This client would be able to use social characteristics and alerts (tiles, toasts, push notifications, etc.) directly with applications for Microsoft's new operating system, which is currently available as a Community Preview.

Future plans and service evolution

SQL2Cloud began as an experimental project to make full use of the advantages and characteristics of cloud computing, in the form of technology services for database administrators, with the aim of providing information about the database systems running in the user's facilities (on-premises). With the passing of time, and the maturing of Microsoft's cloud computing platform, Windows Azure, the service has become a real tool that we use in our own organization and have made available to our customers. As more and more companies incorporate this service into their toolset for administering and managing databases, we are obtaining more feedback, both on the functionality currently available and on possible improvements that would facilitate administration tasks. The most requested functionality, on which work is under way, is a system of alerts which would inform the user, via email and mobile notifications, when one or more indicators exceed the threshold values established in the configuration. This would give service users a more exhaustive, automated control of their critical SQL Server systems, and allow them to act at a time and in a way that minimizes the impact of a problem, even before it happens.
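The alerting functionality described above reduces to comparing each indicator against its configured threshold and firing a notification when it is exceeded. A sketch under stated assumptions – the feature was still under development when the article was written, so the names, the callback standing in for email/mobile push, and the thresholds are all illustrative:

```python
def check_alerts(readings, thresholds, notify):
    """Fire a notification for every indicator over its configured
    threshold; return the names of the indicators that fired."""
    fired = []
    for name, value in readings.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            notify(f"{name} is {value}, above the {limit} threshold")
            fired.append(name)
    return fired

alerts = []   # stand-in for the email/mobile notification channel
fired = check_alerts(
    {"CPU %": 95, "Disk queue": 1},
    {"CPU %": 90, "Disk queue": 5},
    alerts.append,
)
print(fired)  # ['CPU %']
```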

Further information or demonstration

If you believe this service could be of value to your organization and make a difference to it, and you would like to know more about the installation and configuration process, the server requirements, the various data synchronization systems, or the Web, SAR and mobile (Windows Phone 7) clients, please contact your nearest SolidQ office and we will get in touch with your technical team to arrange a meeting and answer all your queries.

Figure-8