Optimize Your Products with AI: 5 Key Factors to Consider for Success

“Done is better than perfect.”

It’s a mantra shared by several icons of Silicon Valley. The phrase doesn’t mean ‘ship before a product is ready,’ or ‘release technology that’s not up to scratch.’

It means that getting something in front of users — and learning from it in order to optimize — is better than seeking perfection at the cost of never finishing. And the sentiment spills over into artificial intelligence, albeit in slightly more technical terms.

During his Turing Award lecture, American computer scientist and professor emeritus at Stanford University Donald E. Knuth said:

“Premature optimization is the root of all evil (or at least most of it).”

Is he saying optimization (or perfection) is evil? Far from it. His point simply reflects the feelings of the Valley: that trying to do too much, too soon, is the birthplace of an evil that comes at a high cost. But this raises the question… 

‘How is optimization… bad?’

“Optimize — to make as perfect, effective, or functional as possible.”

Merriam-Webster

There is no better solution than the one that’s as perfect, effective, or functional as can be. Optimal is the pinnacle. So why not optimize everything, always, and as early as possible? If we lived in a world of perfect information, we would. 

But we don’t. There are always unknowns, many at the outset. And so if you try to optimize too early in your journey — prematurely, as Knuth puts it — you risk paying a high price for optimizing the wrong thing. Why so? Several reasons.

Upfront Costs

It takes time to keep developing. Time costs money. So, the longer you keep working on a solution, the more you have to invest upfront. Moreover, as you get deeper into a project, it gets harder to spot the finer details that will make a product better. This makes the work take even longer, and development costs quickly mount.

Hidden Costs

That said, the early days of development are relatively predictable. You know you can get 80% of the work done within a certain timeframe. However, as you delve into the final 20%, every additional line of code costs disproportionately more to get right. Why? Because you have to start making more assumptions about your users.

You have to guess at edge cases and niche issues. And with every new line of code, the review process becomes heavier, meaning the more ‘prematurely optimized’ the solution, the more costly the next stages become, including… 

Testing Costs

Complex solutions contain more bugs. It’s the nature of the work. And it’s hard to stay on top of a growing bug mound, which can make an apparently optimized product extremely difficult to test.

Maintenance Costs

Developers often forget that code requires upkeep, and this cost can get lost in the search for perfection. If development causes financial pain upfront, maintenance could cause agony down the line.

If you optimize prematurely, remember: you’re only creating new dependencies. It’s best to keep things simple early on and add only the features you know you need.

Trade-offs

If you optimize one metric, it may come at the expense of another. Say you run an eCommerce store and want to switch to processing orders one by one: sure, this could reduce the average wait time for a single order, but it will also reduce throughput and slow everything else down.

On the other hand, if you continue to process in batches, an individual customer may have to wait longer for a single order, but the total throughput will be better, which could be the better outcome in the end. But how can you know which solution is optimal? 
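Before answering, it helps to see the trade-off in numbers. Here’s a minimal back-of-the-envelope sketch in Python; all the timings are invented purely for illustration:

```python
# A toy model of the trade-off above; every number is made up.
SETUP = 0.5      # fixed overhead per processing run (seconds)
PER_ORDER = 0.2  # marginal work per order (seconds)
ORDERS = 100

# One-by-one: every order pays the full setup cost itself,
# but the first customer gets served almost immediately.
single_total = ORDERS * (SETUP + PER_ORDER)   # 70.0 s for all orders
single_first = SETUP + PER_ORDER              # 0.7 s for the first order

# One big batch: the setup cost is paid once, so total throughput
# is far better, but every order waits for the whole batch.
batch_total = SETUP + ORDERS * PER_ORDER      # 20.5 s for all orders
batch_first = batch_total                     # 20.5 s for the first order

print(f"one-by-one: first order in {single_first:.1f}s, "
      f"{ORDERS / single_total:.1f} orders/s overall")
print(f"batched:    first order in {batch_first:.1f}s, "
      f"{ORDERS / batch_total:.1f} orders/s overall")
```

Both outputs are ‘correct’; they just optimize different things. So how do you choose?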

You wait… and you learn.

When You Wait, You Learn

The ‘wait-and-learn’ strategy has several benefits.

First, it gives you the space to assess whether you need to optimize at all. To answer the question, ‘Will the results earn — or save — me enough money to warrant the investment?’ If not, don’t waste your resources. 

…and never optimize in the hope of striking perfection. Instead, always identify a need first, then choose the specific parameters to optimize. In doing so, you will give yourself metrics to measure so that you can analyze the impact of your work.

Second, you’ll often find that playing the waiting game gives you time to spot ‘hidden’ optimizations. You give yourself time to uncover insights into the problem and identify your customers’ genuine needs — at which point, optimization makes sense.

Finally, patience can help you discover what you actually want to optimize. Perhaps your checkout page is slow. It’s causing you to lose clients. You know the reason is the backend, which takes a long time to respond — so, your goal becomes, ‘to optimize the processing speed,’ right? Wrong. 

Your actual goal is ‘to keep clients on your checkout page.’ Sure, improving your website backend may support this. However, doing so could also mean a total site overhaul, paying for new hardware, and forcing existing customers to get used to a new setup.

In contrast, when you realize all you want to do is keep clients on your checkout page, a small tweak — such as removing a slow, obsolete element so the page loads faster — may suffice.

See also: How to implement Artificial Intelligence in your company?

What is the optimal approach to optimization?

The optimal approach to optimization is the systematic approach: the strategy that allows you to create, to measure, then to refine. To take a few steps back and use this newfound distance to maximize your progress when it finally comes to leaping forward.

That’s why at DLabs.AI, we follow three mantras for optimization:

1. Look before you leap (…and measure before you change)

You wouldn’t jump off a cliff without knowing the scale of the drop. Why would you optimize a product without knowing how long it could take? ‘Look before you leap’ means collecting information so that you know what lies ahead.

Gather data, numbers, and customer reviews to make sure what you think is an optimization doesn’t, in fact, make the situation worse. And try to learn the scale of the problem before you commit to a fix; this will allow you to identify and resolve the right bottlenecks at an acceptable cost.

‘Measure before you change’ then gives you a benchmark of where you stand, so that when it comes to asking, “Have we optimized enough?”, the data shows whether you’ve achieved your goal, how far there is left to go, or whether the solution you’re working on (e.g., making your backend faster) isn’t the fix you need.
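As a minimal sketch of what ‘measure before you change’ can look like in practice (`handle_checkout` is a stand-in for whatever you’re about to optimize, not a real API):

```python
import statistics
import time

def handle_checkout():
    time.sleep(0.05)  # placeholder for the real work

# Record a baseline *before* touching anything, so later runs of the
# same script show whether the change actually helped.
samples = []
for _ in range(50):
    start = time.perf_counter()
    handle_checkout()
    samples.append(time.perf_counter() - start)

print(f"baseline median: {statistics.median(samples) * 1000:.1f} ms")
```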

2. Know what to measure

Systems thinkers know that ‘what you measure is what you get’ — meaning you can only get the right outcomes if you measure the right parameters. For this one, let’s use an example. 

Say your customers have complained about slow order confirmations. You do the smart thing: you check your metrics. Indeed, during peak traffic, the average order processing time is several times slower than usual… it’s time to investigate why. 

You find it’s down to a delay in the invoicing service: the order confirmation has to wait for this service to respond. You think, ‘Why not change the flow? Not everyone needs an invoice immediately.’

You make the update, the metrics improve, problem fixed — but wait: customer complaints still flood in. The average confirmation time has decreased, so what’s the problem? 

It seems 1% of all orders still take too long, much longer than usual. You wouldn’t pick up such a marginal issue (it’s just 1 in 100 orders!) by checking the average. It would only be visible in the tail latency, e.g., the 99th percentile.
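Here’s a small illustration of how an average can hide exactly this kind of tail (the numbers are synthetic):

```python
import statistics

# 99 fast confirmations plus one pathological outlier (seconds).
latencies = [0.2] * 99 + [30.0]

mean = statistics.mean(latencies)                 # ~0.5 s: looks healthy
p99 = statistics.quantiles(latencies, n=100)[98]  # ~29.7 s: the real story

print(f"mean: {mean:.2f}s   p99: {p99:.1f}s")
```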

This shows that you need to know what to measure to see the whole picture. Otherwise, you’ll end up wasting time by optimizing the wrong thing.

3. Stay smart (…or as we say, ‘Don’t poke the bear!’)

You know why there’s an issue. You know what to fix. Now, it’s time to figure out how, in the smartest way possible.

It’s often better to remove a problem entirely than to make something ‘run faster.’ If a database query is slow, for example, the fix could be to move obsolete data out of the table once a month (if only to an archive table).
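A minimal sketch of that clean-up idea, using SQLite; the table and column names (`orders`, `orders_archive`, `created_at`) are hypothetical:

```python
import sqlite3

def archive_old_orders(db_path: str, cutoff_date: str) -> None:
    """Move rows older than `cutoff_date` into an archive table."""
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # one transaction: copy, then delete
            conn.execute(
                "INSERT INTO orders_archive SELECT * FROM orders"
                " WHERE created_at < ?", (cutoff_date,))
            conn.execute(
                "DELETE FROM orders WHERE created_at < ?", (cutoff_date,))
    finally:
        conn.close()

# e.g., run once a month from a scheduler/cron job:
# archive_old_orders("shop.db", "2024-01-01")
```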

Always do your best to make any optimization as future-proof as possible by recognizing how your system might scale. That way, if you need to adjust something basic (like the schedule of a script), you can do so without rewriting the code.
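For example, the ‘basic knobs’ of a script like the one above could live in configuration rather than in code; the variable names here are our own invention:

```python
import os

# Read tunables from the environment so that changing the retention
# window or run frequency never requires editing the script itself.
RETENTION_DAYS = int(os.environ.get("ORDER_RETENTION_DAYS", "90"))
RUN_EVERY_HOURS = int(os.environ.get("ARCHIVE_RUN_EVERY_HOURS", "24"))
```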

Finally, keep in mind that optimizations generally follow the law of diminishing returns: at some stage, you’ll hit a point where the cost is higher than the profit, and sometimes it’s best to ignore a problem entirely — that’s when we say… ‘don’t poke the bear.’ 😉 

The best optimization may be no optimization

As we noted at the outset, complex solutions aren’t necessarily optimal. Bigger doesn’t always mean better.

If you can avoid an update, avoid it. And recognize that less experienced team members — whether new to the craft or lacking domain- or platform-specific expertise — tend to make things more complex than they need to be (through no fault of their own, might we add).

Work together to carry out due diligence and do your best to implement solutions that have a measurable benefit: code reviews are one way to achieve this.

We recently worked on a Python web application that had a performance problem. We used profiling to find the code responsible for the issue. As it turned out, it was a couple of lines of ‘left-over’ code from the v1 application, written by a student. 

The code was accumulating a collection of values but, instead of using a list, it did so by concatenating tuples, copying the whole collection on every addition and resulting in quadratic time complexity. All we did was modify the code to use the right structure, with the same number of lines and the same level of complexity.
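The original project code isn’t reproduced here, but the pattern looked roughly like this (names are illustrative):

```python
# Quadratic: tuples are immutable, so each `+=` builds a brand-new
# tuple and copies every element accumulated so far.
def collect_slow(values):
    result = ()
    for v in values:
        result += (v,)
    return result

# Linear: list.append mutates in place (amortized O(1) per element),
# with the same number of lines and the same level of complexity.
def collect_fast(values):
    result = []
    for v in values:
        result.append(v)
    return result
```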

In truth, the solution we created may not have been perfect, but it was good enough.

And once done, the problem was solved.


Looking to optimize an existing product using AI? Chat with a DLabs expert to learn if artificial intelligence is your best optimization option.
