In a world where AI adoption is skyrocketing, enterprises face a critical challenge: building AI systems that are genuinely trustworthy. According to Brooke Hartley Moy, CEO and co-founder of Infactory, trust isn't just a nice-to-have—it's the essential foundation for meaningful AI implementation.
"Trust is the only thing that is really going to matter at a foundational level," Brooke emphasizes. "In order for AI to have any meaningful impact on the enterprise and, you know, honestly, society at large, people have to believe that the outputs are, one, accurate and reliable, but two, that they're based on outcomes that ultimately have net benefit as opposed to risks and dangers sort of embedded in the system."
As organizations race to incorporate AI into their workflows, understanding the critical balance between innovation speed and trustworthiness becomes essential for engineering leaders. Let's explore the key factors that determine whether your AI implementation will succeed or fail.
Increasing AI transparency helps build organizational trust
Building trust in AI systems requires opening up the black box as much as possible: exposing the decision-making process and providing transparency into how results are generated. LLMs (Large Language Models) are inherently probabilistic in their responses, which creates challenges for applications where guaranteed accuracy is essential.
The black box nature of AI is somewhat by design, especially for creative applications where you want the AI to engage in free-thinking ideation. However, when organizations attempt to apply these same models to critical business functions, this opacity becomes problematic.
For engineering leaders, building trust means implementing complementary techniques beyond just the LLMs themselves. This includes establishing guardrails, ensuring proper attribution to sources, and building in explainability features that help users understand the reasoning behind AI outputs.
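To make that concrete, here is a minimal sketch of what "guardrails plus attribution plus explainability" around an LLM can look like. This is not Infactory's implementation; the retrieval function, LLM client, and blocked-topic policy are all illustrative stand-ins you would replace with your own components.

```python
# Minimal sketch of "LLM + guardrails + attribution" -- illustrative only.
# The retrieve() and llm_complete() callables are assumptions, not a real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AttributedAnswer:
    text: str
    sources: list[str]   # documents the answer was grounded in
    confidence: str      # coarse signal surfaced to the end user

BLOCKED_TOPICS = {"medical dosage", "legal advice"}  # example guardrail policy

def answer_with_guardrails(
    question: str,
    retrieve: Callable[[str], list[tuple[str, str]]],  # returns (source_id, passage) pairs
    llm_complete: Callable[[str], str],                # any chat/completion client
) -> AttributedAnswer:
    # Guardrail: refuse categories the business has ruled out for automation.
    if any(topic in question.lower() for topic in BLOCKED_TOPICS):
        return AttributedAnswer("This question needs a human expert.", [], "refused")

    # Attribution: answer only from retrieved passages, and say which ones.
    passages = retrieve(question)
    if not passages:
        return AttributedAnswer("No supporting data found.", [], "low")

    context = "\n".join(f"[{sid}] {text}" for sid, text in passages)
    prompt = (
        "Answer using ONLY the sources below and cite their ids.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    draft = llm_complete(prompt)

    # Explainability: surface which sources the draft actually cites.
    cited = [sid for sid, _ in passages if f"[{sid}]" in draft]
    return AttributedAnswer(draft, cited, "high" if cited else "low")
```

The point of the wrapper is that the user never sees a bare model output: every answer arrives with its sources and a confidence signal, and questions outside the approved scope are routed to a human.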
Why focusing on high-value data accelerates AI adoption
When it comes to AI implementation, data quality emerges as perhaps the single most crucial element. Yet many organizations get caught in the trap of thinking they need to clean and structure all their data before they can benefit from AI.
Brooke offers a refreshing perspective, emphasizing that it's not necessary to clean every piece of data in a company's corpus. She explains that organizations can leverage AI—something her team at Infactory has focused on—to determine which data is truly worth investing in and which data will best answer the most meaningful business queries.
This targeted approach to data enhancement represents a significant shift from previous data management philosophies. Rather than trying to transform all corporate data into assets, organizations can now use AI to identify the highest-value information first.
"It doesn't require nearly the heavy lifting that I think people expect," Brooke explains. "I think part of it is recognizing that there are probably a hundred different queries in a larger organization that covers 90 percent of what a business would actually want to know."
This insight highlights a fundamental truth: not all data is equally valuable. Engineering leaders need to focus on finding those "nuggets of gold" that will deliver the greatest business impact, rather than getting lost in the vast ocean of corporate information.
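One lightweight way to start acting on that "hundred queries cover 90 percent" observation is to mine your own query logs. The sketch below uses a simple frequency analysis as a stand-in for the AI-driven identification Brooke describes, and the log format is hypothetical.

```python
# Illustrative sketch: rank logged business questions by frequency and find the
# smallest set that covers ~90% of demand. The query log format is hypothetical.
from collections import Counter

def high_value_queries(query_log: list[str], coverage: float = 0.90) -> list[str]:
    counts = Counter(q.strip().lower() for q in query_log)
    total = sum(counts.values())
    covered, selected = 0, []
    for query, count in counts.most_common():
        selected.append(query)
        covered += count
        if covered / total >= coverage:
            break
    return selected  # the data behind these queries is worth cleaning first

# Example: a handful of repeated questions dominates the log.
log = ["What was Q3 revenue?"] * 40 + ["Top churn reasons?"] * 35 + ["Office wifi password?"] * 5
print(high_value_queries(log))  # ['what was q3 revenue?', 'top churn reasons?']
```

The output is a short list of questions whose underlying data deserves cleaning and structuring first, rather than a mandate to transform the entire corpus.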
How matching LLMs to the right business problems maximizes impact
Many organizations have rushed to implement AI without a clear understanding of what LLMs are best suited for. According to Brooke, there's a common misconception that these tools can solve every business challenge.
"LLMs are this really magical technology. I'm fully an AI optimist, excited about what LLMs can do, but they also aren't the end all be all solution or the silver bullet I think that people have come to treat them as," she notes.
Understanding the appropriate use cases for LLMs is critical. They excel at creative tasks and content generation: helping writers overcome blank-page syndrome, generating ideas, and augmenting creative workflows. The New York Times recently announced it will use AI to help journalists with article creation, research, and headlines, not to replace writers but to enhance their capabilities.
For engineering leaders, this means carefully evaluating where AI can provide the most value while recognizing its limitations. High-stakes decision-making in areas like healthcare, finance, or manufacturing requires much higher standards of accuracy and reliability than LLMs can consistently provide on their own.
How precision data analysis unlocks actionable business insights
AI technologies enable a more precise approach to data analysis than what was previously possible. Brooke likens traditional data management to dealing with "a giant waste pile, like moving around the Pacific Ocean"—a potent metaphor for the overwhelming amount of information companies accumulate without knowing how to derive value from it.
The new AI-driven approach allows scalpel-level precision, targeting exactly the data points that matter most for specific business queries. This represents a significant shift from traditional data management approaches that attempted to process all available information. As organizations move from simpler Retrieval Augmented Generation (RAG) approaches toward more autonomous agent-based systems, the quality of underlying data becomes even more critical: an agent can only be as reliable as the information it accesses.
"We're moving into much more agentic autonomous view of the world where it's not just about search and retrieval. It's about actual action," Brooke explains. This evolution makes data quality and AI explainability even more important foundations for building user trust.
Why engineering leaders must design for end-to-end AI reliability
For engineering leaders navigating this complex landscape, Brooke recommends several key practices. First, develop a comprehensive understanding of the end-to-end process, considering all potential edge cases and failure points. This becomes especially important as systems become more autonomous.
"Something that people are sort of skipping right now that are in the early stages is really taking the process end to end and pausing to understand what are the outcomes that I'm actually driving towards and do I have each of the steps in place to allow a successful outcome to happen," she advises.
Second, engineering leaders should define a complete AI technology stack that goes beyond just implementing an LLM. This includes implementing features like guardrails, attribution systems, and explainability tools that help build trust with end users.
Finally, maintain a relentless focus on high-quality data and reliable outcomes. In many critical domains, "close enough" simply isn't sufficient; the stakes are too high for partial accuracy.
"There are so many use cases right now where 'close enough' is just not good enough," Brooke warns. "For many things... it's closer to more like 20 percent of the time you're not quite getting a high-quality answer, even if it's not a full hallucination, that's problematic."
Combining human expertise with AI creates sustainable leadership
As AI continues to evolve rapidly, engineering leaders face the challenge of building on quicksand, with a constantly shifting foundation where what was cutting-edge a few months ago quickly becomes obsolete. Success in this environment comes from maintaining focus on the fundamentals: trustworthiness, data quality, and human-centered design. Organizations that establish these practices now will be better positioned to adapt as AI capabilities continue to expand.
For engineering leaders, the most sustainable approach combines the power of AI with human expertise. Just as the New York Times uses AI to augment journalists rather than replace them, the most effective enterprise AI strategies leverage these tools to enhance human capabilities rather than attempting to remove humans from the loop entirely.
By focusing on trustworthiness first and foremost, engineering leaders can ensure their AI implementations deliver genuine value while avoiding the pitfalls of over-reliance on still-evolving technology.
Listen to Brooke Hartley Moy’s Dev Interrupted episode here: