This session is intended for database administrators and database developers who have a basic understanding of indexes, statistics, and partitioning.
A common use case in many databases is a very large table that serves as a fact table or an activity log, with an ever-increasing date/time column. This table is usually partitioned, and it sustains a heavy load of both reads and writes. Such a table presents a challenge in terms of maintenance and performance: activities such as loading data into the table, querying it, rebuilding indexes, and updating statistics all become quite difficult.
The latest versions of SQL Server, up to and including SQL Server 2017, offer several features that can make these challenges go away. In this session we will analyze a use case involving such a large table. We will examine features such as Incremental Statistics, the new Cardinality Estimator, Delayed Durability, and Stretch Database, apply them to our challenging table, and see what happens…
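As a small taste of one of the features covered, Incremental Statistics (available since SQL Server 2014) lets statistics be created and refreshed per partition, so a data load into the newest partition does not force a rescan of the entire table. A minimal T-SQL sketch, with illustrative object names (`dbo.ActivityLog`, the statistics name, and partition number 42 are assumptions, not from the session itself):

```sql
-- Have auto-created statistics on this database use incremental mode
-- where the underlying table supports it (i.e. is partitioned).
ALTER DATABASE CURRENT SET AUTO_CREATE_STATISTICS ON (INCREMENTAL = ON);

-- Create an incremental statistics object on the ever-increasing
-- date/time column of a partitioned fact table (names are illustrative).
CREATE STATISTICS ST_ActivityLog_EventTime
    ON dbo.ActivityLog (EventTime)
    WITH INCREMENTAL = ON;

-- After loading data into the latest partition, refresh only that
-- partition instead of rescanning the whole table (42 is hypothetical).
UPDATE STATISTICS dbo.ActivityLog (ST_ActivityLog_EventTime)
    WITH RESAMPLE ON PARTITIONS (42);
```

On a multi-billion-row table, limiting the statistics refresh to the partition that actually changed is the difference between seconds and hours of maintenance.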
Why I Want to Present This Session:
As a data platform consultant, I encounter many customer scenarios that involve very large tables. I see people struggle with such tables again and again. Throughout my career, I have developed methodologies for working with such tables, and I believe that this session can be useful to a lot of DBAs and developers. I have already presented this session on several occasions, such as PASS Summit and SQL Saturdays, and I received very good feedback.
Latest posts by Guy Glantser
- Working with Very Large Tables Like a Pro in SQL Server 2017 - September 10, 2019
- Advanced Query Tuning Techniques - June 26, 2018
- How to Use Parameters Like a Pro and Boost Performance - April 21, 2017