Companies today are in a tough position. They absolutely have to deliver a fruitful AI strategy or risk falling behind. The same thing happened in the data revolution, and those that rose to the occasion emerged 23x better than their competition. Unfortunately, even if enterprises manage to avoid the educational and process-oriented pitfalls listed above, the technology and security risks are almost sure to ensnare them.
Let’s assume the applications have been identified, the problems understood, the user bases selected, and the success criteria defined. From here, organizations have a few options to obtain a solution and all of them introduce technical risk.
Nearly every technology provider has added the letters “AI” to the first three words on their homepage. Very few have done anything novel with said AI, or really anything at all.
This includes some of the innovative companies that come to mind when you think of Big Tech. The truth is, they were caught just as off guard as you were.
They certainly haven’t been asleep at the wheel since, however. New AI assistants, copilots, enablers, tools, and gadgets galore have made their way into every aspect of their offerings or will be in short order. What do they all have in common?
They’re going to lock you into their models, lock you into their applications, and lock you into their platforms. Their innovation in AI only serves to drive usage, consumption, reliance, and lock-in to their existing business models.
But what if, a year from now, there’s a new latest-and-greatest model you want to switch to (as there seems to be every week right now)? What if you want to ensure cross-compatibility with another model or platform? What if you want the technology to work with the internal model that you finally got off the ground?
How do you remain model agnostic in such a rapidly changing world? Organizations are trading quick solutions for a new form of lock-in: model lock-in.
Most organizations are entirely unaware of this risk.
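One way to hedge against model lock-in is to keep a thin, provider-neutral interface between your applications and whichever model sits behind them. The sketch below is a minimal illustration of that idea in Python; the class names, the self-hosted endpoint, and the provider choices are assumptions made for the example, not a recommendation of any particular vendor.

```python
# Minimal sketch: route all model calls through an interface your applications own,
# so swapping providers is a configuration change rather than a rewrite.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Provider-neutral contract that application code depends on."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenAIChatModel(ChatModel):
    """Hosted-provider backend using the OpenAI Python client."""

    def __init__(self, model: str = "gpt-4o"):
        from openai import OpenAI  # lazy import so other backends need no OpenAI SDK
        self._client = OpenAI()
        self._model = model

    def complete(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


class SelfHostedChatModel(ChatModel):
    """Internal backend; the /v1/complete route here is a hypothetical endpoint."""

    def __init__(self, base_url: str):
        import requests
        self._requests = requests
        self._base_url = base_url

    def complete(self, prompt: str) -> str:
        r = self._requests.post(
            f"{self._base_url}/v1/complete", json={"prompt": prompt}, timeout=60
        )
        r.raise_for_status()
        return r.json()["text"]


def get_model(provider: str) -> ChatModel:
    """Choose the backend from configuration, not from application code."""
    if provider == "openai":
        return OpenAIChatModel()
    if provider == "self_hosted":
        return SelfHostedChatModel(base_url="http://llm.internal:8080")
    raise ValueError(f"Unknown provider: {provider}")
```

With an interface like this, switching to next week’s latest-greatest model, or to the internal model you finally got off the ground, becomes a configuration change rather than a rewrite.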
What’s more, Big Tech has no incentive to revolutionize work if the revolution comes at the expense of its saturation in your org. But that is where the true alpha lies: in reimagining business processes and freeing us from these technological encumbrances. This is the classic Innovator’s Dilemma.
While incremental improvements and AI copilots will proliferate, largely led by Big Tech, the real gains will not come from your existing tools. Prudent decision makers will partner with revolutionary solution providers or build the solutions themselves.
On the other hand, the revolutionary solutions we are seeing spin up from newer upstarts make for fantastic demos but are frankly often ill suited for modern enterprise use. The reason is that these teams are inexperienced in building scalable, secure solutions that meet today’s enterprise requirements and guidelines. We outline many of the
security issues baked into such products here. Reach out if you would like our overview of security best practices when building with LLMs.
A viable option is to build the solution internally, but doing so is fraught with peril without the right expertise.
Which LLM do you use? Are you orchestrating multiple models? With what framework? Should you host models yourself? Are you building to avoid model and platform lock-in, or do you believe OpenAI Enterprise will really always be the best option going forward? Are you prepared to face AI regulation on top of data governance? Can you safely have the model make internal API calls in an auditable way? How will you think about security? How will you keep costs down?
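To make one of those questions concrete, consider the auditability of model-initiated API calls. Below is a minimal sketch, assuming an allowlisted tool registry that writes an append-only audit record before anything executes; the registry, tool names, and log format are illustrative assumptions, not a standard.

```python
# Minimal sketch: the model can only invoke explicitly registered tools, and every
# invocation is logged to an append-only audit trail before it runs.
import json
import logging
from datetime import datetime, timezone
from typing import Any, Callable, Dict

audit_log = logging.getLogger("llm.tool_audit")
audit_log.addHandler(logging.FileHandler("tool_audit.jsonl"))  # append-only JSONL file
audit_log.setLevel(logging.INFO)


class ToolRegistry:
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        """Only explicitly registered functions are callable by the model."""
        self._tools[name] = fn

    def call(self, name: str, arguments: Dict[str, Any], request_id: str) -> Any:
        if name not in self._tools:
            raise PermissionError(f"Tool '{name}' is not on the allowlist")
        # Write the audit record *before* execution so failed calls are captured too.
        audit_log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "request_id": request_id,
            "tool": name,
            "arguments": arguments,
        }))
        return self._tools[name](**arguments)


# Usage: the model proposes a call; your code executes it only through the registry.
registry = ToolRegistry()
registry.register("lookup_invoice", lambda invoice_id: {"invoice_id": invoice_id, "status": "paid"})
result = registry.call("lookup_invoice", {"invoice_id": "INV-1234"}, request_id="req-001")
```

The design choice that matters is the ordering: the audit record is written before execution, so even failed or blocked calls leave a trace for compliance review.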
We work with enterprise partners to address these and the host of other landmines that riddle this landscape.
Most are just beginning to tackle these questions, but all of them are of major consequence. Take security, for example. All of the frameworks your team is likely evaluating contain critical vulnerabilities, as we discuss here. It’s not only the model that can jeopardize your security and compliance.