If you’re running Magento, you already know that OpenSearch (or Elasticsearch, its “big brother” who can’t decide if he’s Open Source or a non-profit) isn’t optional. Without it, your search bar is a wasteland and your categories are a constant 404 error. It’s the heart of your store, and like any heart in this industry, it occasionally decides to give you a massive heart attack. 💔
Managing OpenSearch is like feeding a Gremlin: it seems nice enough until you give it a full reindex after midnight. Suddenly, the Heap memory spikes, the Circuit Breaker trips, and your store hangs harder than a Dalí painting. 🎨😵
Today I’m telling you how I went from having a “zombie” store—no products in categories and a search bar returning an existential void—to actually taming the beast. 🧟♂️✨
💀 The Crime Scene (The Symptoms)
You wake up, open your email before the coffee is even brewed, and you have more alerts than an aircraft carrier in the middle of a naval battle. You start digging, and this is what you find:
Store in Panic Mode: The search bar won’t even give you a polite “no results,” and the products in the categories have vanished like a magic trick. No OpenSearch, no catalog. End of story. 📉
Bleeding Logs: You open exception.log and it’s bloated with Error 429 (Too Many Requests) or index_not_found_exception. It’s the engine’s polite way of saying: “I’m overwhelmed, leave me alone.” 🤬 (There’s a quick grep to confirm it right after this list.)
CRONs in a Coma: You check the cron_schedule table and see indexer_update_all_views marked as Missed. Magento tried to launch them, OpenSearch slammed the door in its face, and the CRON went to take a nap. 😴 (A query to count the damage is also below.)
Useless Restarts: You restart the OpenSearch service, feel like a systems god for five minutes, and then… bam! it crashes again. It’s Groundhog Day, but with more spite. 🔄🧨
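If you want to confirm the log spam from the terminal, a couple of greps will do. (The Magento root path is a placeholder, and the exact error strings depend on your client and module versions, so adjust the patterns to whatever your exception.log actually contains.)
Bash
# Hypothetical path: replace /var/www/magento with your store root
cd /var/www/magento
# How many times has OpenSearch told you to go away?
grep -c "429 Too Many Requests" var/log/exception.log
grep -c "index_not_found_exception" var/log/exception.log
# The last few offenders, for context
grep -E "429 Too Many Requests|index_not_found_exception" var/log/exception.log | tail -n 5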
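And to see how badly the CRONs are doing, ask the cron_schedule table directly (the DB name and credentials below are placeholders):
Bash
# Count yesterday's indexer cron runs by status; "missed" is the one that hurts
mysql -u magento_user -p magento_db -e "
  SELECT status, COUNT(*) AS runs
  FROM cron_schedule
  WHERE job_code = 'indexer_update_all_views'
    AND scheduled_at > NOW() - INTERVAL 1 DAY
  GROUP BY status;"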
🕵️♂️ The Investigation (Descending into Hell)
This is where I started sweating. Why on earth is the memory filling up if “in theory” the server has RAM for days? Pulling the thread, these three gems popped up:
1. The Blue-Green Trap 🔵🟢 Magento uses a Blue-Green reindexing mechanism. To avoid leaving the store empty while reindexing, it creates a new index, dumps the data into it, and when it’s done, it performs a “swap” and deletes the old one. The problem: at the moment of the swap, you have two live indices taking up space. If your index is 2GB, you need 4GB of Heap just for that microsecond. If you’re cutting it close… well, you get the idea. (You can catch the swap in the act with the commands after this list.)
2. The Silent Killer: Cache and Syncs 🕵️♀️ Turns out there’s a search cache that keeps growing. If you enter a category, OpenSearch saves the result “just in case.” When memory usage hits 80-85%, the Garbage Collector passes by to clean what it can. But to free up RAM, surprise, you also need RAM (and time). 🤯 If you have scripts syncing stock and prices, or cache warmers running like crazy, you aren’t letting the Java “janitor” sweep the floor. Memory climbs and climbs until there’s no turning back. 📈🏚️ (The cache and GC stats after this list show it happening live.)
3. The Circuit Breaker (The Panic Button) 🚨 When OpenSearch sees it’s about to explode, it trips the Circuit Breaker. It blocks any new requests. The server will tell you it’s “running,” but in reality, it’s taking a nap so it doesn’t die. And it won’t wake up as long as you keep harassing it with requests. 💤
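For point 1, you can catch the Blue-Green swap red-handed. The “magento2” prefix below is the default index prefix from the Catalog Search configuration; yours may differ:
Bash
# During a reindex you'll see two versions of the same index (e.g. _v1 and _v2) alive at once
curl -s 'localhost:9200/_cat/indices/magento2*?v&h=index,docs.count,store.size'
# The alias shows which version the storefront is actually reading from
curl -s 'localhost:9200/_cat/aliases/magento2*?v'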
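For point 2, the node stats API will tell you how fat the caches have gotten and how hard the Garbage Collector is working:
Bash
# Query cache and request cache size per node (plus evictions, if it's already struggling)
curl -s 'localhost:9200/_nodes/stats/indices/query_cache,request_cache?pretty' | grep -E 'memory_size|evictions'
# GC activity: if collection counts and times keep climbing, the janitor never gets a break
curl -s 'localhost:9200/_nodes/stats/jvm?pretty' | grep -E 'collection_count|collection_time'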
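And for point 3, you can ask the panic button itself how often it has been pressed:
Bash
# A "tripped" value above 0 means the circuit breaker has already fired
curl -s 'localhost:9200/_nodes/stats/breaker?pretty' | grep -E '"tripped"|"limit_size"|"estimated_size"'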
You can monitor the patient with this command (put it on a second screen and enjoy the drama):
watch -n60 "curl -s -X GET 'localhost:9200/_nodes/stats/jvm?pretty' | grep 'heap_used_percent'"
(Watching the percentage creep toward 90% is scarier than a junior’s commit to production on a Friday at 6:00 PM). 😱
💡 The Solution: Taming the Beast
The solution isn’t always throwing money at the problem and adding more RAM. Let’s go step by step…
Golden Rule: Update by Schedule ⏳ If you have your indices set to Update on Save, you’re playing Russian Roulette. Set them to Update by Schedule. Let the CRON work in batches. And forget about nightly full reindexes; they just make a mess and, honestly, are almost never necessary if everything else is right. If for some dark magic reason you really need it, don’t reindex—just invalidate and let the cron deal with it.
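From the shell, that’s a couple of bin/magento commands (run them from the store root as your filesystem user):
Bash
# See which indexers are still in realtime ("Update on Save") mode
bin/magento indexer:show-mode
# Switch everything to "Update by Schedule" and let the CRON do the batching
bin/magento indexer:set-mode schedule
# Really think you need a full reindex? Invalidate instead and let the CRON pick it up
bin/magento indexer:reset catalogsearch_fulltext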
Calculating the Heap 🧪 Don’t just pick a random number. The rule is: you must be able to perform two full reindexes in a row without OpenSearch exploding. If it crashes, don’t overthink it: you lack memory. Increase it, restart, and start over.
It might seem excessive, but that cache eats more than a growing teenager. If you can afford it, double the heap value you land on with that test. 💰
Where do you change it? In the jvm.options file (usually at /etc/opensearch/jvm.options):
# /etc/opensearch/jvm.options
# Set Xms and Xmx EQUAL so Java doesn't waste time resizing.
-Xms4g
-Xmx4g
(Careful: don’t give it more than 50% of the server’s physical RAM; the OS needs to breathe too, poor thing).
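After changing it, restart and make sure the JVM actually picked up the new limits (the service name assumes a standard systemd install):
Bash
# Restart OpenSearch so the new heap settings apply
sudo systemctl restart opensearch
# Confirm the max heap and the current pressure per node
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.current,heap.max,heap.percent'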
Tuning the env.php 🧙♂️
Configure the batch_size in your app/etc/env.php. Start with low values (like 50) and increase them until reindex times are optimal. As you raise the batch sizes, times will drop; if you go too high and they start getting worse again, dial it back. That’s your “sweet spot.” 🍬 (There’s a quick way to time each attempt right after the config below.)
'indexer' => [
    'batch_size' => [
        'cataloginventory_stock' => ['simple' => 50],
        'catalog_category_product' => 50,
        'catalogsearch_fulltext' => [
            'partial_reindex' => 50,
            'mysql_get' => 50,
            'elastic_save' => 50
        ],
        'catalog_product_price' => [
            'simple' => 50,
            'default' => 50,
            'configurable' => 50
        ],
        'inventory' => [
            'simple' => 50,
            'default' => 50,
            'configurable' => 50
        ]
    ]
]
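To find that sweet spot without guessing, time a reindex after each change. catalogsearch_fulltext is usually the one that hurts; swap in whichever indexer you’re tuning:
Bash
# batch_size is read straight from env.php; a config cache clean is cheap insurance anyway
bin/magento cache:clean config
# Time the heavy one, tweak batch_size, repeat until the numbers stop improving
time bin/magento indexer:reindex catalogsearch_fulltext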
🤘 Conclusion for when OpenSearch explodes (which it will)
Don’t size your server for when the store is calm. Size it for the chaos. 🌪️
Run stress tests: do a full reindex while your stock importers are running (there’s a sketch of exactly that just below). If it blows up, either give the Heap more breathing room or make your importers less aggressive (throw in a 0.5s sleep between products, what’s the rush?). 🐢
If you don’t leave dead time for the Garbage Collector to do its magic, OpenSearch will take its revenge. And believe me, it has a very long memory. 🤖💀
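Here’s a minimal sketch of that stress test. The importer script is a placeholder for whatever your stack actually runs:
Bash
# Terminal 1: watch the heap (same command as earlier)
watch -n60 "curl -s 'localhost:9200/_nodes/stats/jvm?pretty' | grep heap_used_percent"

# Terminal 2: worst-case scenario, a full reindex while the importer hammers away
bin/magento indexer:reindex catalogsearch_fulltext &
./bin/stock-import.sh   # placeholder for your own importer
wait
# If the heap pins above ~90% or the circuit breaker trips, you have your answer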
Good luck, and may the Garbage Collector be with you! 🤘🔥

So, what do you think?