Insightful session at Mydbops Opensource Database Meetup 14 in Bangalore, where our Chief Technology Officer, Manosh Malai, delves deep into MongoDB optimization. In this presentation, Manosh explores the two primary sharding strategies, vertical and horizontal, covering their fundamentals, pros, cons, and practical applications, and walks through real-world case studies and performance benchmarks to help you optimize your MongoDB deployments.
1. Scaling MongoDB with Horizontal
and Vertical Sharding
Manosh Malai
CTO, Mydbops LLP
07th Oct 2023
Mydbops 14th Opensource Database Meetup
2. About Me
Manosh Malai
CTO, Mydbops LLP
Interested in Open Source technologies
Interested in MongoDB, DevOps & DevOpSec practices
Tech Speaker/Blogger
9. When To Shard - I
Size of Data: If your database is becoming too large to fit on a single server,
sharding may be necessary to distribute the data across multiple servers.
Performance: Sharding can improve query performance by reducing the amount
of data that needs to be processed on a single server.
10. When To Shard - II
Scalability: Sharding enables you to horizontally scale out your MongoDB
database by distributing data across multiple nodes.
Availability and Redundancy: In a sharded cluster each shard is typically deployed
as a replica set, so data remains available even if individual nodes fail.
11. When To Shard - III
Availability: Sharding can improve the overall availability of your database
by providing redundancy across multiple nodes.
Flexibility: Sharding enables you to distribute data across multiple nodes
based on your specific requirements.
15. Vertical Sharding Strategy - Pros
Different data access patterns:
• Vertical sharding can be useful when different tables or collections are accessed at different frequencies or with different access patterns.
• By splitting these tables into separate shards, the performance of queries that only need to access a subset of the data can be improved.
Better data management:
• Vertical sharding gives finer control over data access, since sensitive or confidential data can be stored separately from other data. This can help with compliance with regulations such as GDPR or HIPAA.
16. Vertical Sharding Strategy - Cons
Data interconnectedness:
• Vertical sharding may not be the best solution for databases with heavily interconnected data. If complex joins or queries span multiple shards, horizontal sharding or other scaling strategies may be more appropriate.
Limited scalability:
• Each shard is still bounded by a single server, so vertical sharding is suitable only for small or medium data sizes.
17. How Can We Achieve Vertical Sharding?
Service discovery:
• Consul
• etcd
• ZooKeeper
Data sync:
• Mongopush
• mongosync
• mongodump & mongorestore
19. Vertical Sharding: Service Discovery and Data Migration
• Use Consul to dynamically discover the nodes in your MongoDB clusters and route traffic to them accordingly.
• Use Mongopush to sync the data from the X1 cluster to the X2 cluster.
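The application-level routing this implies can be sketched in Python. This is an illustrative sketch only: the collection names and cluster URIs are invented, and in practice the URIs would come from a service-discovery layer such as Consul rather than a static dict.

```python
# Hypothetical vertical-sharding router: each collection group is
# pinned to its own MongoDB cluster. URIs and names are invented;
# a real deployment would resolve them via Consul/etcd/ZooKeeper.

CLUSTER_URIS = {
    "orders":   "mongodb://x1.example.com:27017",  # stays on the X1 cluster
    "payments": "mongodb://x1.example.com:27017",
    "audit":    "mongodb://x2.example.com:27017",  # migrated to the X2 cluster
}

def cluster_for(collection: str) -> str:
    """Return the URI of the cluster that owns this collection."""
    try:
        return CLUSTER_URIS[collection]
    except KeyError:
        raise ValueError(f"no cluster registered for {collection!r}")
```

The application then opens a client against `cluster_for(name)` instead of a single shared connection string, which is what makes the X1-to-X2 split transparent to callers.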
22. What Is MongoDB Horizontal Sharding and Its Components?
• Shards: each shard contains a subset of the sharded data
• Mongos: the query router that directs operations to the appropriate shard(s)
• Config Server: stores the cluster's metadata and configuration
23. Shard Key
• The shard key is used to divide and distribute a collection evenly across shards.
• The shard key consists of a field or fields that exist in every document in the collection.
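The role of the shard key can be sketched with a toy router in Python. This is not MongoDB's internal implementation; the chunk bounds, shard names, and key field are invented for illustration.

```python
# Illustrative sketch: route a document to a chunk by comparing its
# shard key value against chunk range bounds (ranged sharding).
# Bounds and shard names are invented for the example.

CHUNKS = [  # (inclusive lower bound, exclusive upper bound, shard)
    (float("-inf"), 1000, "shard-a"),
    (1000, 2000, "shard-b"),
    (2000, float("inf"), "shard-c"),
]

def route(doc: dict, shard_key: str = "customer_id") -> str:
    # The shard key field must exist in every document of the collection.
    value = doc[shard_key]
    for lo, hi, shard in CHUNKS:
        if lo <= value < hi:
            return shard
    raise RuntimeError("chunk ranges must cover the whole key space")
```

Because the chunk ranges partition the whole key space, every document lands on exactly one shard, which is what lets mongos target queries that include the shard key.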
24. MongoDB Shard Key Strategies

Range Sharding
Pros:
• Even data distribution (with a well-chosen shard key)
• Even read and write workload distribution
• Targeted operations for both single and ranged queries
Cons:
• Susceptible to the selection of a good shard key that is used in both read and write queries

Hashed Sharding
Pros:
• Even data distribution
• Even read and write workload distribution
Cons:
• Range queries are likely to trigger expensive broadcast operations

Zone Sharding
Pros:
• Isolates a specific subset of data on a specific set of shards
• Keeps data geographically close to application servers
• Enables data tiering and SLAs based on shard hardware
Cons:
• Susceptible to the selection of a good shard key that is used in both read and write queries
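The range-vs-hashed trade-off above can be demonstrated with a small simulation. This is a toy model, not MongoDB's internal hashing (which is based on a hash of the BSON value); the chunk bounds and shard count are invented.

```python
import hashlib
from collections import Counter

# Toy comparison: distribute monotonically increasing keys (e.g. an
# auto-incrementing order id) across 3 shards by range vs. by hash.

SHARDS = ["shard-a", "shard-b", "shard-c"]
keys = range(10_000)

# Range sharding with invented chunk bounds [0, 5000) and [5000, +inf):
# increasing keys always land in the top chunk, so the insert load
# piles onto one shard and shard-c never receives data here.
range_counts = Counter(SHARDS[min(k // 5_000, 1)] for k in keys)

# Hashed sharding: hashing the key spreads consecutive values evenly.
def hashed_shard(key: int) -> str:
    h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]

hash_counts = Counter(hashed_shard(k) for k in keys)
```

The flip side, as the slide notes, is that a range query like `5000 <= key < 6000` touches one chunk under range sharding but must be broadcast to every shard under hashed sharding.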
26. Shard Key Indexes
• Single-field ascending index (MongoDB 2.0+)
• Single-field hashed index (MongoDB 2.0+)
• Compound ascending index (MongoDB 2.0+)
• Compound hashed index (MongoDB 4.4+)
27. Declare Shard Key
sh.shardCollection(namespace, key, unique, options)
sh.shardCollection("db.test", {"fieldA": 1, "fieldB": "hashed"}, false, {numInitialChunks: 5, collation: {locale: "simple"}})
• When the collection is empty, sh.shardCollection() creates an index on the shard key if one does not already exist.
• If the collection is not empty, you must create the index before calling sh.shardCollection().
• The shard key index cannot be a multikey, text, or geospatial index on the shard key fields.
• MongoDB can enforce a uniqueness constraint only on a ranged shard key index.
• In a unique compound index where the shard key is a prefix, MongoDB enforces uniqueness on the entire key combination, not on the individual components of the shard key.
28. Shard Key Improvements After MongoDB v4.2
• Mutable shard key value (v4.2)
• Refinable shard key (v4.4)
• Compound hashed shard key (v4.4)
• Live resharding (v5.0)
29. What and Why: Refinable Shard Key (v4.4)
Shard Key: customer_id
(Example of a skewed distribution: Shard A 21%, Shard B 15%, Shard C 64%)
Refining the shard key:
db.adminCommand({refineCollectionShardKey: "<database>.<collection>", key: {<existing key>, <new suffix field>: <1 | "hashed">, ...}})
• Refine at any time
• No database downtime
• Refining a collection's shard key improves data distribution and resolves issues caused by insufficient cardinality leading to jumbo chunks.
30. Refinable Shard Key (v4.4)
Shard Key: vehicle_no
Refining the shard key:
db.adminCommand({refineCollectionShardKey: "mydb.test", key: {vehicle_no: 1, user_mnumber: "hashed"}})
Avoid changing the range or hashed type of any existing shard key field, as it can lead to data inconsistencies. For instance, refrain from changing a shard key such as {vehicle_no: 1} to {vehicle_no: "hashed", order_id: 1}.

Guidelines for Refining Shard Keys
• The cluster must be running at least version 4.4, with a feature compatibility version of 4.4.
• Retain the same prefix when defining the new shard key: it must begin with the same field(s) as the existing shard key.
• Additional fields can only be added as suffixes to the existing shard key.
• A new index that supports the modified shard key must be created first.
• Stop the balancer before executing the refineCollectionShardKey command.
• Use sh.status() to check the status.
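Why adding a suffix field resolves jumbo chunks can be shown with a small cardinality sketch. The field names mirror the slide's example, but the data set and value shapes are invented.

```python
# Illustrative sketch: refining a shard key appends a suffix field,
# which multiplies the number of distinct shard key values so that
# oversized ("jumbo") chunks become splittable. Data is invented.

docs = [
    {"vehicle_no": f"KA-{i % 10}", "user_mnumber": f"+91-{i:010d}"}
    for i in range(1000)
]

def cardinality(docs, fields):
    """Count distinct shard key values over the given fields."""
    return len({tuple(d[f] for f in fields) for d in docs})

before = cardinality(docs, ["vehicle_no"])                  # only 10 values
after  = cardinality(docs, ["vehicle_no", "user_mnumber"])  # 1000 values
```

With only 10 distinct key values, a hot vehicle's chunk can never be split below one key value; after refining, each chunk boundary can fall between individual users.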
33. Resharding Process Flow
Prerequisites:
• Storage: before resharding a collection of 1 TB, it is recommended to have at least 1.2 TB of free storage.
• I/O: ensure that your I/O capacity utilization is below 50%.
• CPU load: ensure your CPU load is below 80%.
Steps:
• Rewrite your application's queries to use both the current shard key and the new shard key.
• Deploy your rewritten application.
• Monitor the resharding process using a $currentOp pipeline stage.
• Once resharding completes, rewrite your application's queries to use only the new shard key.
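The prerequisite thresholds above can be collected into a pre-flight check. The thresholds (1.2x free storage, 50% I/O, 80% CPU) come from the slide; the function itself is a hypothetical helper, not a MongoDB tool.

```python
# Hypothetical pre-flight check mirroring the slide's resharding
# prerequisites: 1.2x free storage, I/O below 50%, CPU below 80%.

def ready_to_reshard(coll_size_tb: float, free_tb: float,
                     io_util_pct: float, cpu_load_pct: float) -> bool:
    return (
        free_tb >= 1.2 * coll_size_tb  # e.g. 1 TB collection needs 1.2 TB free
        and io_util_pct < 50
        and cpu_load_pct < 80
    )
```

In practice the inputs would come from your monitoring stack before you issue reshardCollection.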
34. Resharding: Who Are Donors and Recipients?
• Donors are shards that currently own chunks of the sharded collection.
• Recipients are shards that will own chunks of the sharded collection according to the new shard key and zones.
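The two roles can be derived mechanically from the old and new chunk ownership maps. The chunk and shard names below are invented; note that one shard can be both a donor and a recipient.

```python
# Sketch: derive donors and recipients from chunk ownership before
# and after resharding. Ownership maps are invented for illustration.

current_owner = {"chunk-1": "shard-a", "chunk-2": "shard-b"}  # today
future_owner  = {"chunk-1": "shard-b", "chunk-2": "shard-c"}  # new key/zones

donors     = set(current_owner.values())  # shards that give chunks away
recipients = set(future_owner.values())   # shards that receive chunks
both_roles = donors & recipients          # a shard can play both roles
```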
35. Resharding Internal Process Flow
Initialization Phase
• The balancer determines the new data distribution for the sharded collection.
Index Phase
• Each recipient shard creates a new, empty sharded collection with the same collection options as the original. This new collection serves as the target for the data the recipient shards write.
• Each recipient shard builds the necessary new indexes.
Clone, Apply, and Catch-up Phase
• Each recipient shard clones an initial copy of the documents it would own under the new shard key.
• Each recipient shard then applies oplog entries from operations that happened after it cloned the data.
Commit Phase
• When all shards have reached strict consistency, the resharding coordinator commits the resharding operation and installs the new routing table.
• The resharding coordinator instructs each donor and recipient shard primary, independently, to rename the temporary sharded collection; the temporary collection becomes the new resharded collection.
• Each donor shard drops the old sharded collection.
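The linear ordering of these phases can be modeled as a minimal state machine. The phase names follow the slide; the code is a sketch of the ordering only, not of MongoDB's resharding coordinator.

```python
from typing import Optional

# Sketch of the resharding phase ordering described above.
PHASES = [
    "initialization",       # balancer plans the new data distribution
    "index",                # recipients create the empty temp collection, build indexes
    "clone-apply-catchup",  # recipients clone documents, then apply oplog entries
    "commit",               # coordinator installs the new routing table;
                            # temp collection renamed, old collection dropped
]

def next_phase(phase: str) -> Optional[str]:
    """Return the phase that follows, or None after commit."""
    i = PHASES.index(phase)
    return PHASES[i + 1] if i + 1 < len(PHASES) else None
```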
37. To Summarize, What Issues Does This Feature Resolve?
• Jumbo chunks
• Uneven load distribution
• Query performance that degrades over time due to scatter-gather queries
38. Improvements in MongoDB 5.2 and 7.x
• Default chunk size raised to 128 MB (5.2)
• AutoMerger (7.0)