SQL Server Blog

Businesses today are faced with the challenge of staying profitable and investing in future innovations. Speed and flexibility are the name of the game, and effective data-scaling techniques are highly sought after. Yet exploding data volumes, diverse data types, and numerous database management systems make it harder than ever for data professionals to consolidate data and synthesize key business insights.

SQL Server 2019 big data clusters simplify the security, deployment, and management of all of your key data workloads and data lakes, while delivering innovative security and compliance features, industry-leading performance, and mission-critical availability.

In this webinar I will join Travis Wright, Principal Program Manager at Microsoft, to demonstrate how to simplify big data and make faster, better business decisions with the new SQL Server 2019 big data clusters feature. Learn how the latest edition has evolved beyond your grandfather’s SQL Server into a unified data platform that includes Apache Hadoop, Apache Spark™, and AI. Attend ready to learn about:

  • Using data virtualization to integrate, query, and retrieve all of your data from relational, non-relational, and unstructured sources, including big data, without replicating or moving it.
  • Easily manage all of this data with a big data cluster:
    • Managing relational and big data can pose major problems such as complex scale-out deployment, time-consuming patching and upgrades, and cumbersome security management. With SQL Server 2019 big data clusters, you can store high volumes of data in a data lake and access it easily using either SQL Server or Apache Spark™.
    • You’ll have access to management services, an admin portal, and integrated security in Azure Data Studio, which provides a unified development and administration experience for big data and SQL Server users. You’ll also learn how to easily deploy SQL Server on containers, Kubernetes, and OpenShift.
    • Reason over all of your data using advanced analytics and artificial intelligence that’s built into the platform. You’ll learn from the demo how you can use machine learning with built-in Spark notebooks to easily feed integrated data from many sources to your model training.
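To give a flavor of the data virtualization point above, here is a minimal T-SQL sketch of exposing an external Oracle table through PolyBase so it can be queried alongside local SQL Server data. The server address, credential, and table and column names are hypothetical placeholders for illustration, not examples from the webinar.

```sql
-- Hypothetical sketch: surface a remote Oracle table as a local external table
-- via PolyBase, then join it with local data without copying either side.
CREATE DATABASE SCOPED CREDENTIAL OracleCredential
    WITH IDENTITY = 'oracle_user', SECRET = '<password>';  -- placeholder secret

CREATE EXTERNAL DATA SOURCE OracleSales
    WITH (LOCATION = 'oracle://oracleserver:1521',          -- placeholder host
          CREDENTIAL = OracleCredential);

CREATE EXTERNAL TABLE dbo.SalesOrders (
    OrderID    INT,
    CustomerID INT,
    Amount     DECIMAL(10, 2)
)
WITH (LOCATION = 'XE.SALES.ORDERS',   -- placeholder remote object name
      DATA_SOURCE = OracleSales);

-- Queries against the external table are pushed to the source at run time;
-- the data is never replicated into SQL Server.
SELECT o.OrderID, o.Amount, c.CustomerName
FROM dbo.SalesOrders AS o
JOIN dbo.Customers   AS c ON c.CustomerID = o.CustomerID;
```

The same external-table pattern applies to other sources (Teradata, MongoDB, other SQL Servers, HDFS); only the data source type and connection details change.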