Is the phrase “app modernization” just shorthand for refactoring and re-platforming apps to the cloud and dumping the legacy database you currently use?
It’s often touted as the only way for businesses with legacy databases — which is arguably most businesses — to meet user demands for real-time or near-real-time experiences, and to deal with the massive amounts of data this entails. These experiences can be the instant hailing of an Uber, an immediate credit decision, or a recommendation that simplifies choosing a gift for a partner.
The same dynamic applies to internal data and apps, with users expecting the real-time experiences they get as consumers, whether they’re checking inventory levels or seeking executive-level insight into key business performance metrics.
There’s a simple equation underlying this, Sanjeev Mohan, founder and director of data research and analytics company SanjMo, told The New Stack. “The faster the performance of a database, the higher the customer loyalty and engagement,” Mohan said. “It’s not just about query performance. It’s also how fast you can write — or load data into the database, or insert and update.”
But as legacy databases are impacted by more and more data, he added, “your tables are getting very large, so performance can start to degrade.”
Tough times, tough choices
Few organizations are immune to the pressures of real-time demands. But not everyone can respond with a seamless transition to a new cloud-native database.
This may simply be because an organization’s data isn’t going anywhere; for example, regulatory obligations could mean that the data simply has to remain on-premises for the foreseeable future.
An organization may have considered moving to the cloud but hesitated over the logistics of the migration. Or it has looked at the numbers carefully and realized they don’t quite add up, especially if operations are set to grow significantly.
Adding to the challenge, the deluge of bad economic news is seeping into the cloud world, meaning costs are rising after years in which the price per unit of performance seemed only to fall.
“I think the world will be in this type of hybrid state for much longer than people thought two or three years ago, when money was incredibly cheap,” Ryan Powers, product marketing manager for Redis Enterprise Cloud, told The New Stack.
So for many IT managers, Powers said, the real question is, “How do you modernize with as little refactoring and rearchitecting as possible?”
After all, he pointed out, while “an increasing part of your applications need to be real-time, the reality is that a good part of them don’t.”
When that legacy database is still a keeper
This is where a caching layer, deployed alongside the legacy database, can come into play. As Powers said, “You need something that’s a flexible data layer that can be used as a buffer with all the databases you use.”
Redis Enterprise, for example, can be used for cache prefetching, where the application reads data stored in memory rather than reading directly from disk, which speeds up queries, especially on read-intensive workloads. It also offers write-behind caching, so data processed by the application is written to the cache layer in real time, with the central system-of-record database updated asynchronously afterwards.
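These two patterns can be sketched in a few lines. The snippet below is a generic illustration, not Redis Enterprise’s own API: `FakeCache` is an in-memory stand-in for a Redis client (in production you would use something like redis-py’s `redis.Redis()` with the same `get`/`set`/`setex`/`rpush` calls), and the key names, TTL and queue name are assumptions for the example.

```python
import json

# In production the cache would be a Redis client (e.g. redis.Redis());
# this minimal stand-in keeps the sketch self-contained and runnable.
class FakeCache:
    def __init__(self):
        self.kv, self.queues = {}, {}
    def get(self, key):
        return self.kv.get(key)
    def set(self, key, value):
        self.kv[key] = value
    def setex(self, key, ttl, value):  # TTL ignored in the stub
        self.kv[key] = value
    def rpush(self, queue, value):
        self.queues.setdefault(queue, []).append(value)

def get_product(cache, db, product_id):
    """Cache-aside read: try the in-memory cache first, fall back to the DB."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # hit: served from memory
    row = db[product_id]                      # miss: read from the slow store
    cache.setex(key, 300, json.dumps(row))    # populate with a 5-minute TTL
    return row

def save_product(cache, product, queue="writeback:products"):
    """Write-behind: update the cache immediately, queue the DB write."""
    key = f"product:{product['id']}"
    cache.set(key, json.dumps(product))       # readers see the new value at once
    cache.rpush(queue, json.dumps(product))   # a worker drains this to the DB later
```

The design point is that the legacy database stays the system of record: reads are merely accelerated, and writes still land there, just asynchronously.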
Both approaches promise to significantly reduce latency and improve the user experience. Another application is generating secondary indexes to speed up queries on secondary keys, something that can be time-consuming and complex with legacy databases such as MySQL or Oracle.
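One common way to build such a secondary index in a cache layer is a set per indexed value, mapping, say, a customer ID to the IDs of that customer’s orders. This is a hand-rolled sketch, not a Redis Enterprise feature walkthrough; `SetStore` mirrors the Redis `SADD`/`SMEMBERS` commands (with redis-py you would call `r.sadd`/`r.smembers`), and the key naming is an assumption.

```python
# Stand-in for a Redis client's set commands, so the sketch runs anywhere.
class SetStore:
    def __init__(self):
        self.sets = {}
    def sadd(self, key, member):
        self.sets.setdefault(key, set()).add(member)
    def smembers(self, key):
        return self.sets.get(key, set())

def index_order(store, order):
    """Maintain a secondary index on customer_id alongside the primary key."""
    store.sadd(f"idx:orders:customer:{order['customer_id']}", order["order_id"])

def orders_for_customer(store, customer_id):
    """Look up by secondary key in the cache, avoiding a scan of the legacy DB."""
    return store.smembers(f"idx:orders:customer:{customer_id}")
```

Each lookup is then a single set read in memory rather than a scan or an extra index maintained inside the legacy database.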
When it comes to global, multicloud and hybrid use cases, it’s important to think about how you keep data consistent across all regions while ensuring applications run as quickly as possible, Powers added. Redis Enterprise features active-active geographic distribution, enabling local-speed reads and writes while consistent data is replicated across all regions with less than a millisecond of latency.
So while the long-term goal is complete application modernization, Powers said, “There are places where you can still use Oracle or MySQL, and put us alongside, as a fix in the meantime, while you make these transitions.”
In these cases, he argued, “modernization is about speed, scale, total cost of ownership.”
So the question of how to modernize your database becomes much more nuanced than whether you can afford the time and money to embark on a full refactoring and re-platforming project.
That said, there is a financial aspect to this approach, beyond the raw cost of the caching layer, Mohan pointed out: “I can maintain my investment in my old system, use it as a system of record, but then I can have much faster retrieval.”
Raising the limits without going anywhere
Maintaining a legacy database alongside a caching layer helps keep future licensing and infrastructure costs down. Once you hit the limits of your current installation, Mohan said, you’re faced with the pain of buying more licenses from your old vendor and beefing up your hardware accordingly. “But with caching, I can offload some of the workload to an in-memory database.”
So when do you know you’re ready for this modernization approach? First, you need to figure out whether there are use cases you’re struggling to support because existing relational databases just aren’t fast enough, said John Noonan, senior manager of product marketing at Redis.
You’ll know you have one, Noonan told The New Stack, “if you have users waiting, whether they’re internal or they’re customers.”
He cited the example of a client in the financial industry that was struggling to implement real-time payment processing, due to the number of Oracle tables that needed updating while processing transactions. Redis Enterprise was inserted into the checkout process to speed up transactions, with the data then passed to the system of record after each transaction completes.
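The shape Noonan describes, where the fast path completes in the cache and the system of record catches up afterwards, implies a drain worker on the slow side. A minimal sketch, with hypothetical field names, a `deque` standing in for the Redis list the checkout path appends to, and a plain callable standing in for the Oracle writes:

```python
import json
from collections import deque

def drain(queue, write_to_db):
    """Pop queued transactions and persist each one to the system of record."""
    persisted = []
    while queue:
        txn = json.loads(queue.popleft())  # oldest queued transaction first
        write_to_db(txn)                   # e.g. the INSERTs into the Oracle tables
        persisted.append(txn["txn_id"])
    return persisted
```

In a real deployment the worker would run continuously (for instance blocking on the queue) and handle write failures, but the essential trade is the same: the user-facing transaction never waits for the legacy database.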
But it is also a question of scale. Noonan recalled another client, a company operating a fantasy sports platform in India, that would see huge usage spikes when team lineups for cricket matches were released 30 minutes before a game. This led to massive slowdowns, as its SQL-based system simply couldn’t scale in response.
So the company moved lineup publishing in-memory with Redis Enterprise to break this bottleneck, with the data written back to its relational database after each match was complete.
This is precisely the kind of trouble New Zealand-based e-commerce company Blackpepper encountered with what its CEO Alain Russell described as its “incredibly write-heavy setup” using RDS, Elasticsearch, Redis and DynamoDB on Amazon Web Services (AWS).
“We were having scaling issues under heavy loads and hitting incredibly high costs to scale RDS instances to handle the loads,” Russell told The New Stack. The company also encountered problems synchronizing data.
Testing showed that using Redis Enterprise as the primary data store could deliver 20-30x speed improvements on some common business tasks. It also solved the data synchronization problem. “Moving this to Redis Enterprise simplified our architecture, simplified how we debug, and gave us a single data store to look at,” Russell said.
Ultimately, many organizations may want to move to the cloud or even ditch their legacy data infrastructure altogether, Mohan said. But re-architecting and modernizing will always be easier said than done. Caching gives companies the opportunity to extend their legacy platform, at least in the medium term.
“So your modernization strategy is: Step one, switch to the cloud. Step two, implement a caching solution and become a bit more cloud-friendly,” Mohan said. “Step three could be to refactor your legacy environment to a cloud-native offering.”
The beauty of this approach is, perhaps, that you can take only the second step and still benefit from it.