
Baris Gultekin started his career at Google in the early 2000s. A graduate of Cornell and Stanford, Gultekin spent over a decade (across two stints) at the search giant, helping to build what would later become Google Assistant and going on to serve as a product director in the AI space before founding his own blockchain startup. Involved in the development of usable AI before it garnered widespread attention, Gultekin says his time in the industry has been overwhelmingly devoted to making AI products practical for both businesses and consumers.
His latest role, as Head of AI at data and AI platform Snowflake, builds on this experience. With the enterprise AI software market predicted to hit $391bn by 2030, according to ABI Research, Gultekin’s role is to ensure Snowflake remains an effective operator in this growing industry. This, he explained to Tech Monitor, involves listening to customer needs, road-mapping the evolution of Snowflake’s AI products and keeping a close eye on the AI landscape to respond to the growing demand for effective AI-driven insight and choice. Not so different, then, from the remit of your average forward-thinking CIO.
In the following interview, edited for length and clarity, Baris Gultekin explains how businesses are already using AI for business intelligence, what they want next, and why he isn’t threatened by the emergence of DeepSeek, despite Snowflake’s partnership with Microsoft.

You’ve worked on AI-driven products since 2006, across advertising, personal assistance, and the mobile space. Where will the next evolutions for AI products come from?
Personally, I can see three clear areas where there is developmental momentum. Firstly, AI is already democratising data. It doesn’t get much attention, but the first and foremost use case for AI in many businesses is data analysis. Such applications can sift through data on everything from phone calls to customer interactions and, when paired with a natural language overlay, let analysts turn large amounts of that unstructured data into structured insights.
Secondly, AI agents are beginning to emerge. I’m seeing businesses build so-called ‘data agents’ capable of knitting together real-world insights and information from documents to support everything from improved interaction outcomes to marketing and business intelligence.
Finally, we’re seeing momentum in the area of business intelligence itself. Our customers, at least, want to move past rigid dashboards towards ones capable of responding to follow-up questions through natural language interaction. This is where the latest demand is.
In the meantime, how do you manage high customer expectations about AI?
The clear thing to do [from an AI vendor or product owner’s perspective] is to manage expectations through communication. It’s important to ask what the focus area for the AI application is, as well as to ensure that any use case is well thought through and that any system that does get deployed doesn’t hallucinate but delivers high-quality outputs.
What advice would you give to businesses looking to maximise the effectiveness of their internal AI deployments?
Process and governance should be at the centre of AI management. Many businesses that we work with, for example, will have their own AI governance boards that determine who the AI vendors can be. Here, trust is critical. When an AI application is running, they have to know their data won’t be misused, that it adheres to their internal governance, and that it doesn’t deliver unwanted results or share information with the wrong parties. For example, if a customer builds an HR chatbot, it’s crucial that its access controls are set correctly so only the right people can access specific information.
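By way of illustration, and not something described in the interview, the sketch below shows the kind of role-based filter an HR chatbot might apply before any document reaches the model; the roles, documents and function names are hypothetical.

```python
# Hypothetical sketch: restrict which documents an HR chatbot can retrieve
# based on the requester's role. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set  # roles permitted to read this document

def retrievable_docs(documents, requester_roles):
    """Return only the documents the requesting user is allowed to read."""
    return [d for d in documents if d.allowed_roles & set(requester_roles)]

# Example: an employee asking about parental leave should not see salary reviews.
docs = [
    Document("policy-leave", "Parental leave policy ...", {"employee", "hr"}),
    Document("salary-review-2024", "Individual salary reviews ...", {"hr"}),
]
print([d.doc_id for d in retrievable_docs(docs, ["employee"])])  # ['policy-leave']
```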
All AI, of course, is built to deliver some kind of answer. However, that answer’s accuracy can be improved by grounding the AI with the right context, which, in effect, is about ensuring the retrieval process works effectively: going into the right documents, scanning for the right information, and using that to deliver the right answer.
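To make that point concrete, here is a minimal, hypothetical sketch of retrieval-grounded prompting: rank the documents most relevant to a question, then fold them into the prompt so the answer is grounded in them. The keyword-overlap scoring and prompt wording are assumptions for illustration, not Snowflake’s implementation.

```python
# Hypothetical sketch of retrieval-grounded prompting (not Snowflake's code).
# A trivial keyword-overlap score stands in for a real vector search.
def score(query, text):
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q) or 1)

def build_grounded_prompt(query, documents, top_k=3):
    """Rank documents against the query and fold the best ones into the prompt."""
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)[:top_k]
    context = "\n\n".join(ranked)
    return (
        "Answer using only the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

prompt = build_grounded_prompt(
    "What is the notice period for resignations?",
    ["Resignations require a four-week notice period.", "Expense claims are due monthly."],
)
# The prompt would then be sent to whichever language model the business has chosen.
```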
And any decision on using AI is framed around its possible return on investment [ROI]. It’s top of mind for our customers. But with the cost of AI coming down rapidly and its capabilities increasing, the trend is in the right direction from an ROI perspective.
There’s increasing buzz in the market about the potential of small language models. Are they likely to replace LLMs, in your opinion?
Smaller models have their advantages. Businesses that want to solve specific problems, and perhaps don’t want to wait as long or spend as much money on a large language model, can get the same performance on a focused task at a lower cost. This might be document extraction, classification of data, or translation. These models are easy to use, widely available, smaller and cheaper, and can still work through large amounts of data.
On the other hand, we’re still seeing businesses interested in spending huge amounts of time and money on building extremely capable LLMs, models whose ability to problem-solve is improving every day. That reasoning ability is essential for cutting-edge business applications, so the use of LLMs, I would say, can only grow.
Earlier in our interview, you mentioned how businesses were increasingly interested in having choices when using AI. Is there more choice on the market now and is this a good thing?
Perhaps [in the Western hemisphere] we thought there were only a handful of AI providers emerging, but the emergence of DeepSeek was a surprise, not least due to its reasoning logic, capability, and price. But we should have known: DeepSeek has always built high-quality models. Of course, that emergence generated demand. Indeed, we now even host their model inside Snowflake, as our customers wanted to try it and it’s open source. Of course, that means providing security guarantees.
Additionally, the emergence of DeepSeek showcases the reality of the market going forward, which will be defined by choice. And choice is good! Choice brings prices down and ensures we push the envelope of what is possible. It also creates a conundrum for customers over which model to pick, especially as many will want a vendor for the long term: once you pick one model, it’s easier to keep that model up to date. Regardless, this choice-centric market will be the state of play going forward.