It is not uncommon to find a wide range of situations among our customers in terms of virtual machine performance with SQL Server. In many cases performance is far from ideal but, in general terms, the virtual machines themselves are not to blame. What usually happens is that when we move SQL Server to a virtual machine, we become constrained by a capped amount of resources (CPU/memory/IO) that differs significantly from that of the physical machine.
Companies are increasingly choosing cloud services such as Azure or AWS, which normally provide a flexible, cost-effective and scalable option to carry out their operations without the restrictions imposed by on-premises technologies.
One of the issues that many of our customers face when attempting to migrate on-premises instances to the cloud is the lack of simple shared storage. Although there are some alternatives supported by third-party software or SDS solutions that allow us to configure a Failover Cluster instance in Azure, these are highly complex and therefore add significant further costs to the solution's TCO.
In many scenarios, we need integrated authentication in order to access the data sources that feed our analytical system. As the use of Azure becomes increasingly widespread, at least for part of our infrastructure, some of these sources end up hosted in Azure databases. In this post, we will discuss an actual error that we came across when configuring and using integrated authentication against Azure databases from SSIS.
In this entry, we will show you how to create bookmarks and a few different scenarios where they might be useful. Bookmarks store the state of a specific report page, including the filter selection and the visibility of the different objects, allowing the user to return to that same state by simply selecting the saved bookmark.
In this blog post, we will look at the new Power BI functionality known as Dataflow, which already exists in services such as Office 365. We must highlight that this new service is still in beta, so it is currently subject to modifications and updates.
Although SQL Server Integration Services, hereinafter SSIS, is capable of uploading Excel files, in most cases it can be time consuming because any small modification to the Excel files can make the SSIS package fail. For that reason, the best option is usually to transform those Excel files into .csv format, since uploading text files will cause you significantly fewer issues than Excel files.
You can quickly save any Excel file as csv manually by saving as .csv from within Excel. However, it becomes an issue when you have to do the same for a lot of Excel files, or in cases where you need the change to be done automatically.
In this post, we will explain how to do this format change automatically using PowerShell and how to loop through files in the same directory in order to upload several Excel files together using SSIS as the main tool for the whole process.
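The post itself uses PowerShell for the conversion; purely as an illustration of the same batch idea, here is a minimal Python sketch. The function names are hypothetical, and the actual conversion step assumes pandas (with the openpyxl engine) is available:

```python
from pathlib import Path

def excel_to_csv_jobs(src_dir, out_dir):
    """List (source .xlsx, target .csv) pairs for every workbook in src_dir."""
    src, out = Path(src_dir), Path(out_dir)
    return [(p, out / (p.stem + ".csv")) for p in sorted(src.glob("*.xlsx"))]

def convert_all(src_dir, out_dir):
    """Convert every workbook found; assumes pandas + openpyxl are installed."""
    import pandas as pd
    for xlsx, target in excel_to_csv_jobs(src_dir, out_dir):
        pd.read_excel(xlsx).to_csv(target, index=False)
```

Separating the directory loop from the conversion keeps the file discovery testable on its own; the same split applies to the PowerShell version.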
Regardless of the tools used for data analysis, the results are usually delivered as a Word document or a PowerPoint presentation.
In this post, we will create a PowerPoint presentation and insert a series of graphics and text programmatically, using the officer and rvg packages together. We will also take the opportunity to introduce, for those who do not know it, the pipe operator, which is very useful for avoiding deeply nested function calls.
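The post covers R's pipe (`%>%`); as a language-neutral illustration of why piping reads better than nesting, here is a tiny Python sketch (the `pipe` helper is hypothetical, built for this example):

```python
from functools import reduce

def pipe(value, *funcs):
    """Thread value through funcs left to right, like R's %>% operator."""
    return reduce(lambda acc, f: f(acc), funcs, value)

# A nested call must be read inside-out...
nested = round(abs(-3.7))

# ...while the piped form reads left to right, in execution order:
piped = pipe(-3.7, abs, round)
```

Both expressions evaluate to the same result; the piped form simply states the steps in the order they happen, which is the readability gain the post exploits when chaining officer calls.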
In an on-premises environment, when we design solutions for geographical disasters, the most common option is log shipping. Asynchronous database mirroring or availability groups with asynchronous replicas are also common, but they carry an additional risk that is not usually considered: changes are transferred as quickly as the network and the target system allow. This means that when the disaster has a human origin, such as a serious mistake, by the time we become aware of it the error has already been replicated and applied. Obviously, a better solution would be to combine both options, which are not exclusive; this covers more disaster scenarios while increasing the cost of the solution.
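The protection log shipping offers against human error comes from delaying the restore of log backups on the secondary. A minimal Python sketch of that selection rule (all names are hypothetical, for illustration only):

```python
from datetime import datetime, timedelta

def logs_safe_to_restore(log_backups, now, protection_window):
    """Return only the log backups old enough to fall outside the window.

    Anything newer is held back, so a human error noticed within the
    protection window has not yet been applied on the secondary.
    """
    return [b for b in log_backups
            if now - b["backup_finish"] >= protection_window]

# Example: with a 1-hour window, a log backup finished 15 minutes ago
# is held back, while one finished 2 hours ago is restored.
```

With asynchronous mirroring or asynchronous availability-group replicas there is no such window: the error arrives as fast as the network delivers it, which is exactly the risk described above.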
The increasing diversification of the type and volume of data, together with falling computational processing and storage costs, has opened a window of opportunity for the resurgence of a discipline that already existed on paper and among equations: Machine Learning.