Newsroom

Joining Forces with Bulutistan: AI and Cloud Power in Retail

We joined Bulutistan’s event on AI and cloud in retail, where our CRO Vorga Can showed how Dot is making an impact.

September 26, 2025

Recently, we had the opportunity to take part in Bulutistan's event, “Yapay Zekâ ve Bulut Gücüyle Perakendede Yeni Dönem” (“A New Era in Retail with AI and Cloud Power”).

Bulutistan, a leader in GPU and cloud infrastructure, brought together an inspiring day filled with discussions on how artificial intelligence and cloud power are shaping the future of retail. It was a privilege to share the stage and contribute to a conversation that is so critical for one of the world’s most dynamic industries.

At the event, our CRO Vorga Can walked through how Dot is already creating tangible impact in retail. From streamlining operations to building personalized shopping experiences, Dot enables retailers to make smarter decisions and serve their customers more effectively. We believe AI is not just an innovation layer in retail, but a foundation that redefines how businesses operate, scale, and connect with their customers.

We sincerely thank Bulutistan for the warm welcome and for hosting such a thoughtful gathering. This is only the beginning; you’ll be hearing much more soon about our journey with Bulutistan!

AI Hub

Generative Optimization: Less Effort, More Output

Why is generative engine optimization the smarter path for enterprises? Lower costs, faster rollout, sharper results.

September 25, 2025

Artificial intelligence has rapidly evolved into a cornerstone of modern enterprises. From natural language processing to predictive analytics, businesses are racing to harness AI’s potential. Yet, as models grow larger and more complex, organizations face a pressing question: how can we get more out of AI without drowning in costs and inefficiencies?

The answer lies in generative engine optimization, a strategy that emphasizes efficiency, smart alignment, and contextual precision over brute-force scaling. Instead of asking “how big can the model get?”, the new question becomes: “how much more value can we extract with less effort?”

In this article, we’ll explore what generative engine optimization is, why it matters, how it works across industries, and how it ties into the broader debate around foundation models. By the end, you’ll see why this approach represents the future of enterprise AI.

What Is Generative Engine Optimization?

Generative engine optimization (GEO) refers to refining how AI models generate outputs by optimizing the inputs, prompts, and workflows that fuel them. It’s not about buying more GPUs or building endlessly larger models. Instead, it’s about smarter engineering and orchestration that makes existing systems work harder, better, and faster.

Think of it like tuning a race car. You could buy a bigger engine, but unless the tires, aerodynamics, and fuel system are optimized, the car won’t reach peak performance. GEO applies the same principle to AI.

The three central pillars are:

  • Quantity: Providing sufficient training examples without overwhelming the system with redundancy.
  • Quality: Removing irrelevant, noisy, or contradictory data.
  • Context: Aligning datasets and prompts with the specific environment, industry, or workflow.

By balancing these pillars, organizations can build AI systems that achieve higher accuracy and efficiency — while using fewer resources.

Why Enterprises Need Generative Engine Optimization

Enterprises often find themselves at a crossroads with AI adoption. On one hand, there is pressure to adopt state-of-the-art foundation models. On the other, there is the reality of limited budgets, regulatory compliance, and operational constraints. Generative engine optimization bridges that gap.

Here’s why GEO matters:

  1. Cost Efficiency
    Running massive foundation models on raw infrastructure can burn through budgets. GEO lowers the computational footprint, reducing cloud and hardware expenses.
  2. Speed to Deployment
    Optimized workflows mean enterprises don’t need to spend months fine-tuning. GEO accelerates deployment by making AI production-ready faster.
  3. Customization Without Complexity
    Enterprises in niche industries — like healthcare diagnostics or legal compliance — need specialized outputs. GEO allows them to tailor results without retraining from scratch.
  4. Reduced Hallucinations
    By cleaning up data pipelines and refining prompts, GEO minimizes one of AI’s biggest flaws: making things up.
  5. Scalability
    Optimization ensures systems grow sustainably. Instead of scaling costs linearly with use, GEO allows AI to handle more tasks with the same resources.

How Generative Engine Optimization Works

The mechanics of GEO can be broken down into three practical levers.

1. Data Engineering

Raw data is rarely model-ready. GEO emphasizes building structured, domain-specific datasets. For example, a hospital using AI to analyze medical records must ensure privacy compliance while also feeding the model with standardized terminologies like ICD codes. Clean, domain-aligned datasets dramatically boost performance.
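
As a rough illustration of what domain alignment means in practice, here is a minimal Python sketch. The ICD-10 mapping table and record fields are hypothetical, not a real pipeline: it normalizes free-text diagnoses to standardized codes and drops rows that fail a privacy check.

```python
# Minimal sketch: domain-aligned data preparation for a medical-records model.
# The ICD-10 mapping and record fields are illustrative assumptions.
ICD10 = {"type 2 diabetes": "E11", "hypertension": "I10"}

def prepare(records: list[dict]) -> list[dict]:
    clean = []
    for record in records:
        if record.get("patient_name"):          # privacy: skip non-anonymized rows
            continue
        code = ICD10.get(record.get("diagnosis", "").lower())
        if code is None:                        # quality: drop unmapped diagnoses
            continue
        clean.append({"icd10": code, "notes": record.get("notes", "")})
    return clean

print(prepare([
    {"diagnosis": "Hypertension", "notes": "BP 150/95"},
    {"diagnosis": "Type 2 Diabetes", "notes": "HbA1c 8.1", "patient_name": "leaked"},
]))  # -> [{'icd10': 'I10', 'notes': 'BP 150/95'}]
```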

2. Prompt Strategies

Prompts are the steering wheel of generative AI. Poorly designed prompts lead to inconsistent, vague, or inaccurate answers. GEO promotes context-rich prompting techniques such as:

  • Chain-of-thought prompting: guiding models through reasoning steps.
  • Role-based prompting: framing the model as a domain expert (e.g., “You are a financial advisor specializing in SMEs”).
  • Instruction tuning: standardizing the way prompts are structured across workflows.
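
To ground these techniques, here is a minimal Python sketch that combines role-based prompting with chain-of-thought steps in one standardized template. The role text, reasoning steps, and example question are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch: a standardized prompt template combining role-based
# prompting with chain-of-thought steps. Role and steps are illustrative.
ROLE = "You are a financial advisor specializing in SMEs."

TEMPLATE = """{role}

Question: {question}

Think through this step by step:
1. Restate the client's situation in one sentence.
2. List the relevant constraints (regulation, cash flow, risk appetite).
3. Only then give a recommendation that cites the steps above.
"""

def build_prompt(question: str, role: str = ROLE) -> str:
    """One structure for every workflow keeps prompts consistent and auditable."""
    return TEMPLATE.format(role=role, question=question)

print(build_prompt("Should a 10-person bakery take a 12-month working-capital loan?"))
```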

3. Workflow Orchestration

The most advanced GEO implementations use multi-agent systems where different agents collaborate to solve tasks. For example:

  • A router agent directs queries.
  • A supervisor agent checks quality and relevance.
  • A task-specific agent handles domain expertise.

By breaking tasks into smaller, specialized processes, enterprises achieve higher reliability and scalability.
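
A minimal sketch of that pattern follows, with the agent logic stubbed out. In practice each function would wrap an LLM call; the routing rule and quality check here are placeholders.

```python
# Minimal sketch of the router / supervisor / task-agent pattern.
# Each agent is a stub; in production it would wrap a model call.
def legal_agent(query: str) -> str:
    return f"[legal analysis of: {query}]"

def finance_agent(query: str) -> str:
    return f"[financial analysis of: {query}]"

def router(query: str):
    """Router agent: send the query to the right specialist."""
    return legal_agent if "contract" in query.lower() else finance_agent

def supervisor(answer: str) -> bool:
    """Supervisor agent: placeholder quality gate (a real one would score relevance)."""
    return len(answer) > 10

def handle(query: str) -> str:
    agent = router(query)
    answer = agent(query)
    if not supervisor(answer):
        answer = agent(query)   # retry once if the quality check fails
    return answer

print(handle("Review this supplier contract for termination clauses."))
```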

Industry Applications of Generative Engine Optimization

GEO is not just a theoretical concept. It is actively reshaping industries where efficiency, compliance, and precision are non-negotiable.

Finance: Smarter Risk Assessment

Banks often rely on massive datasets to evaluate loan applications. Traditional models might require retraining to adjust for new regulations or customer profiles. With GEO, financial institutions can refine prompts and workflows to instantly adapt, lowering risks of bias while speeding up decision-making.

For example, a small business applying for a loan can be evaluated with a GEO-optimized system that pulls in regulatory context, verifies financial documents, and generates clear, audit-ready reasoning for approval or denial.

Healthcare: Precision Diagnostics

Medical AI systems face the dual challenge of accuracy and compliance. A GEO-based approach allows healthcare providers to optimize diagnostic models by feeding them with carefully curated patient records, anonymized scans, and verified medical literature. This reduces hallucinations and improves trust in life-critical decisions.

Imagine a radiologist using an AI assistant that doesn’t just label an image but explains its reasoning step by step, citing relevant medical studies. That’s GEO in action.

Retail & E-Commerce: Personalized Experiences

Retailers use AI for recommendations, inventory planning, and customer service. Instead of retraining a massive model whenever consumer trends shift, GEO enables businesses to refine workflows on the fly. For instance, AI shopping assistants can tailor product recommendations by combining customer history with live market data, generating conversations that feel both personal and efficient.

The Connection to Foundation Models

Foundation models are powerful, but they are not flawless. They excel in generalization but often stumble in domain-specific contexts. As discussed in The Truth About Foundation Models, the pursuit of ever-larger models comes with trade-offs: environmental impact, interpretability issues, and diminishing returns.

Generative engine optimization complements foundation models rather than competing with them. GEO acts as the bridge between general-purpose intelligence and enterprise-specific needs. Think of foundation models as the “raw clay” and GEO as the sculptor that shapes them into useful tools.

Case Study: A Manufacturing Example

Consider a global manufacturer struggling with supply chain optimization. Their legacy AI system relied on RPA (Robotic Process Automation), which could speed up repetitive tasks but lacked contextual understanding. By adopting GEO, the company integrated:

  • Structured supplier datasets.
  • Prompts fine-tuned for logistics language.
  • Multi-agent orchestration for forecasting and anomaly detection.

The result? Supply chain predictions that were 30% more accurate while reducing compute costs by 25%. GEO not only improved outcomes but also delivered measurable ROI.

The Future of Generative Engine Optimization

Looking ahead, GEO is set to evolve along three major trajectories:

  1. Integration with Agentic AI
    Enterprises will adopt agent-based orchestration where multiple specialized agents cooperate, each optimized for specific tasks.
  2. Real-Time Feedback Loops
    Models will continuously refine themselves based on user interactions, optimizing performance dynamically.
  3. Sustainability as a Core Metric
    As concerns about AI’s carbon footprint grow, optimization will no longer be optional. GEO will become the key to making AI environmentally viable.

This shift represents a broader change in AI strategy: from endless scaling to purposeful efficiency.

Conclusion: The Path Forward

The future of enterprise AI isn’t about bigger models or more compute power. It’s about generative engine optimization, making every piece of the system work smarter, not harder. From finance to healthcare to retail, GEO ensures that AI doesn’t just scale, it scales responsibly, efficiently, and sustainably.

Organizations that embrace this mindset will not only reduce costs and increase accuracy but will also set themselves apart in the competitive AI landscape. The winners won’t be those with the biggest models, but those who master the art of less effort, more output.

Frequently Asked Questions

How is generative engine optimization different from fine-tuning?
Fine-tuning adapts a model to specific datasets, but GEO takes a holistic approach — optimizing data pipelines, prompts, and workflows together.

Can small companies benefit from generative engine optimization?
Absolutely. In fact, SMEs often lack resources for large-scale retraining, so GEO gives them enterprise-level performance without enterprise-level costs.

Is generative engine optimization a replacement for foundation models?
No. It complements them. Foundation models provide raw intelligence, while GEO ensures they’re tailored, efficient, and reliable in enterprise environments.

Novus Voices

Product & Design Meetups: How Can Two Tightrope Walkers Share The Same Rope?

See how Novus builds Dot: Product & Design in sync, AI tools in workflow, and communication at the heart of product making.

September 23, 2025

Hello everyone. On September 5 we hosted a very lively Product & Design Talks meetup. We met peers from the industry and shared how, at Novus, we build an AI product by keeping Product and Design shoulder to shoulder. We explained how we use AI tools in our workflow, what challenges we face, and how we manage communication throughout. This post serves as a tidy recap for those who couldn’t attend and a handy reference for those who did. At Novus, we keep communication open and sincere, and we treat the topic seriously. In the age of AI, we aim to lock in the right team rhythm and turn it into a continuous and measurable practice.

What Is Dot? What Are We Building?

Before anything else, we explain what we build as an AI product. Our flagship is Dot, an agentic AI framework. Dot runs multi-model and multi-agent architectures and focuses on orchestration. In practice, Dot brings dozens of models, tools, and integrations together under one intelligence backbone and routes each task to the best capacity.

This backbone stands on three legs:

  • Autonomous Model Optimization makes real-time decisions across the cost-quality-speed triangle and routes different LLMs and tools to the right context.
  • Supervisor AI Agents control the workflow, manage decision points, step in when things go off path, and keep an auditable decision log.
  • Chain of Thought and Environment Configuration preserve reasoning traces and the execution environment so work stays reproducible.

As a result, we orchestrate many intelligences with a single integration, speed up our learning-by-doing loop, and tie outcomes to measurable metrics in the field. Dot also runs in cloud, on-prem, and hybrid environments.

Balance: How Product & Design Work Day to Day

We prefer Kanban over fixed sprints so we can adapt to a fast-moving AI world. Our flow runs from Discovery through Alignment to Validation, with Product and Design in constant handoff.

In Discovery, we frame the problem together with the business goal and define success metrics early. We run benchmarks, user interviews, and market and competitor scans. We surface assumptions, map constraints and opportunities, and shape the first PRD draft, user flow skeletons, and the measurement plan as our single source of truth.

As needs get clearer, we analyze and prioritize. We phase the scope and record decisions transparently on the roadmap. On the Product side, we deepen the PRD. On the Design side, we advance UX flows, interaction logic, and visual language from the same shared context. We validate risky assumptions early with clickable prototypes.

Handover is not a one-way file toss. We keep a two-way dialogue enriched with prototypes, usage scenarios, and accessibility notes. After the Design handover, we get final designs and a ready-to-use prototype. We keep updating the PRD, decompose the work into small and tractable packages, and move into grooming. Because information and feedback flow well, grooming acts more like a kickoff than a debate. With development handover, we set the path to production, and the process does not end there.

In Validation, we run usability tests, A/B experiments, and product analytics such as events, funnels, and retention. We feed results back into the backlog. Because we define success thresholds upfront, we decide based on data which features to keep, and we iterate or shelve what does not work.

Tools That Build the Builders: How AI Shapes Our Workflow

We build AI products, and we let AI tools shape how we work. We actively use Dot in our own kitchen:

  • PRD Agent converts the problem, goals, scope, acceptance criteria, and measurement plan into a clean PRD by using past work and shared context. We version it and keep it as the single source of truth.
  • Wireframe to Prototype Code Agent turns simple sketches and interaction notes into a working prototype, for example clickable Next.js components, so we test risky flows the same day.
  • The Figma to PRD MCP bridge cross-checks design decisions with requirements and automatically details the PRD based on diffs, including empty states, error messages, and accessibility.
  • With Jira Agent through MCP, we generate epics, stories, and sub-tasks from the PRD, set labels, priorities, and dependencies, and keep two-way sync as things change.
  • In production, Analytics Companion gathers telemetry and product analytics, proposes experiments, runs impact analysis, and points to the next iteration.

End result: our write-draw-ship loop accelerates while quality gates such as reviews, tests, and measurement trigger automatically.

Sharp Turns Ahead: The Realities of AI

The AI landscape moves fast. Norms are still forming. That speed is both a curse and a gift. We shorten the validation window with early prototypes and controlled experiments. We package the same core tech for different personas and industries and keep design decisions reusable, the architecture modular, and the positioning crisp. We keep the roadmap alive. We phase work by weighing value, effort, and risk, make changes visible, and share them across the company. Our roadmap is not a sacred manifesto. It is a living organism. Above all, we measure before we ship. We track feature performance, conversion, and retention closely, and we treat analytics and user feedback as the fuel of iteration.

We Communicate, Therefore We Ship

We repeat a few words often, by design. Clear communication keeps the system smooth and the chaos low. We maintain cross functional alignment so Product, Design, Engineering, and other teams move to the same rhythm, with agendas, decisions, and dependencies written, accessible, and transparent. With a single source of truth, we version PRDs, design files, flow charts, and metrics in one place so everyone points to the same reference. Product also centralizes incoming feedback, ideas, and suggestions, filters them, and makes them consumable. With a culture of continuous feedback, not only user tests but also internal comments and critiques flow into the backlog through regular rituals. Meeting hygiene and asynchronous habits favor written clarity. Meetings are decision oriented, and notes stay traceable and repeatable. Everyone has a voice. When needed, we prioritize and phase ideas, not just features.

Quick Wrap Up

Success in AI products is less about which model we use and more about the experience we deliver for the right user, in the right moment, with the right context. With Dot orchestration, Product and Design pass the ball faster, and with measurement and automation, we nurture a culture that learns continuously. That culture helps us build Dot on a stronger and more forward looking foundation. We keep communication steady, prioritize ideas and data, and treat not only the product but also product development itself as a living system. Our strongest muscle is not just processes or methodology enhanced by AI, it is our collective communication. We communicate, therefore we ship.

AI Hub

Vibe Coding: Let the AI Write While You Vibe

Vibe coding transforms development by letting AI generate code while teams focus on creativity, strategy, and faster delivery.

September 22, 2025

Coding has always been described as a highly structured process. Developers sit at their desks, carefully writing lines of logic, debugging syntax errors, and testing outputs. While that process is still alive and well, artificial intelligence is reshaping how we think about programming. A new concept has entered the scene: vibe coding.

Vibe coding is not about replacing developers but about changing the relationship between humans and machines. Instead of typing every command, developers can simply describe what they want in natural language while the AI generates the code in real time. It creates a flow where creativity and logic meet, and the human role shifts toward guiding, reviewing, and fine-tuning rather than building everything from scratch.

The phrase “let the AI write while you vibe” captures the essence of this shift. Developers focus on the big picture, thinking about how applications should behave, while the AI handles the heavy lifting. It is coding that feels less like manual labor and more like creative direction.

What Makes Vibe Coding Different

Traditional coding requires close attention to detail, from variable names to function definitions. Vibe coding changes the process by abstracting those details away. A developer might say, “Build me a login page with email and password authentication,” and within seconds, the AI produces a working prototype.
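
As a minimal sketch of that interaction, assuming an OpenAI-compatible chat API and an illustrative model name, the whole loop can be a single function call:

```python
# Minimal sketch of a vibe-coding request, assuming an OpenAI-compatible API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def vibe_code(request: str) -> str:
    """Turn a natural-language feature request into generated code."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice; any code-capable model works
        messages=[
            {"role": "system",
             "content": "You are a senior web developer. Reply with complete, runnable code only."},
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content

print(vibe_code("Build me a login page with email and password authentication."))
```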

This shift offers several key differences:

  • Speed: AI generates lines of code in seconds, cutting development cycles dramatically.
  • Accessibility: Non-technical users can participate in software development by explaining needs in plain language.
  • Focus on design: Developers can spend more time considering user experience and business logic rather than syntax.
  • Collaboration: Teams can brainstorm features conversationally, while the AI handles implementation.

The result is a workflow where humans set the vision, and the AI accelerates execution. That is why the word “vibe” fits so well. Instead of grinding through repetitive tasks, developers can move into a creative zone, testing ideas and experimenting without fear of wasting hours of effort.

The concept is also expanding beyond individuals. Entire teams are adopting vibe coding as part of their workflows, integrating it with project management, design, and testing pipelines. The goal is not to replace engineers but to make them faster, more versatile, and more imaginative.

Benefits and Use Cases of Vibe Coding

Every new paradigm in technology must prove its value in the real world. Vibe coding is already showing promise across different industries, not just in hobby projects but also in enterprise environments.

Key Benefits

  1. Rapid prototyping
    Startups and enterprises can move from idea to prototype within hours. Instead of building minimum viable products manually, AI handles the repetitive coding, allowing humans to test and refine concepts more quickly.
  2. Lowering the barrier to entry
    For entrepreneurs without technical backgrounds, vibe coding provides a way to launch digital products without hiring full development teams.
  3. Enhanced productivity for engineers
    Developers no longer need to reinvent the wheel. By offloading repetitive tasks like writing boilerplate code, they can spend more energy on solving unique problems.
  4. Creative exploration
    With AI as a coding partner, teams can try new ideas with little risk. If one approach does not work, they can pivot instantly.
  5. Integration with business systems
    Vibe coding can be tied directly to existing systems, such as CRMs, ERPs, or analytics platforms. This opens the door for faster automation inside organizations.

Real-World Use Cases

  • Web development: Designing landing pages, forms, and dashboards with natural language instructions.
  • Data science: Asking the AI to clean datasets, generate charts, or run analyses without writing every function.
  • Mobile applications: Creating prototypes of apps with standard features like authentication, chat, or geolocation.
  • Business workflows: Automating repetitive internal tasks such as report generation or CRM updates.

One notable example is how vibe coding intersects with customer relationship management. Companies now rely on AI to connect sales conversations directly into their CRMs, helping turn leads into conversions. For more on this application, check our related article: Best AI System for CRM: Turning Conversations into Conversions.

The Challenges of Vibe Coding

Like any new technology, vibe coding is not without its difficulties. While the idea of “AI writes, you vibe” is appealing, reality demands careful consideration.

  • Quality control: AI-generated code can work but may not follow best practices or long-term maintainability standards. Human review is always required.
  • Security risks: AI systems may unintentionally generate insecure code if not trained or monitored properly.
  • Overreliance: New developers might lean too heavily on AI, skipping the learning process of understanding core programming principles.
  • Customization limits: AI excels at common patterns but may struggle with highly specialized or novel requirements.
  • Organizational fit: Large enterprises must adapt workflows and compliance processes to accommodate AI-driven development.

These challenges highlight why vibe coding should be seen as a complement, not a replacement, for human expertise. Skilled developers are still necessary to guide, validate, and ensure that outputs align with business goals.

Addressing the Challenges

The good news is that most of these issues are solvable:

  1. Human-in-the-loop review ensures that every piece of generated code passes quality checks.
  2. Security audits can be automated to catch vulnerabilities early.
  3. Training and education help teams balance reliance on AI with deeper technical understanding.
  4. Governance frameworks provide rules for when and how AI coding tools should be used in enterprise contexts.

As these practices mature, vibe coding will only grow stronger as a reliable methodology.

The Future of Vibe Coding

Looking ahead, vibe coding is set to become more than a novelty. It has the potential to redefine how development teams and organizations approach software creation. Several trends are already emerging:

  • Deeper integration with IDEs: Vibe coding assistants will become standard features in developer tools, offering real-time support.
  • Multi-modal instructions: Developers may soon guide AI with not just text but also voice, sketches, or diagrams.
  • Team collaboration: Entire teams could “talk” to the coding AI in a shared space, merging project management and development.
  • Continuous learning systems: AI will improve its code generation by learning from previous company projects, creating customized style and performance standards.
  • Business-wide adoption: Non-technical teams, such as marketing or HR, will use vibe coding principles to build workflows without traditional developers.

The larger picture shows vibe coding as part of a democratization movement in software. Coding is no longer only for specialists; it is becoming a shared capability across organizations. The role of developers will evolve into architects, reviewers, and innovators, while AI handles the execution.

For enterprises, this shift could translate into faster product cycles, reduced costs, and greater adaptability. For individuals, it creates opportunities to experiment and build with minimal barriers. And for the AI industry, it marks the next stage of collaboration between human intention and machine execution.

Conclusion

Vibe coding represents a bold reimagining of software development. By allowing AI to generate code while humans guide and refine, it opens the door to faster innovation, wider participation, and more creative workflows. While challenges exist — from quality control to organizational fit — the trajectory is clear: vibe coding is not a passing trend but a glimpse into the future of programming.

Letting the AI write while you vibe is not about doing less work but about working differently. It allows developers to move into a creative mindset, focusing on what matters most while delegating the rest. As tools and practices mature, vibe coding will stand alongside traditional programming as a cornerstone of modern development.

For organizations willing to embrace this new approach, the rewards will be substantial: efficiency, innovation, and the chance to transform ideas into working products faster than ever before. The next wave of coding is already here, and it is one that invites everyone to take part.

Frequently Asked Questions

What is vibe coding?
Vibe coding is an approach where developers describe what they want in natural language, and AI generates the code, allowing faster and more creative workflows.

Does vibe coding replace traditional developers?
No. Vibe coding complements developers by handling repetitive tasks while humans focus on quality, customization, and strategic direction.

How can businesses benefit from vibe coding?
Businesses can accelerate prototyping, reduce development costs, and make coding more accessible to non-technical teams, improving overall agility.

Newsroom

Bringing AI Solutions to “Üretimde Yapay Zeka Sahnesi”

Our team joined Zorlu Holding’s “Üretimde Yapay Zeka Sahnesi” event to showcase AI solutions and connect with industry leaders.

September 19, 2025

Participating in events with our team is always a priority for us, and yesterday we had the pleasure of joining Zorlu Holding’s “Üretimde Yapay Zeka Sahnesi” (“The AI Stage in Manufacturing”) event. It was a truly valuable experience to be part of this gathering focused on the role of artificial intelligence in manufacturing.

Our Sr. Product Manager, Hüseyin Umut Dokuzelma, took the stage to present the AI solutions we’ve developed for manufacturing use cases. His engaging delivery and energy made the session both informative and enjoyable, while the strong interest shown in Novus afterward was especially rewarding.

Alongside the presentation, our Sr. Marketing Specialist, Doğa Su Korkut, and Head of Sales, Ahmet Sercan Ergün, spent the day actively networking and building new connections. Our co-founders, Vorga Can and Rıza Egehan Asad, also joined during the networking session, making it a meaningful moment to be together as a team.

We sincerely thank Zorlu Holding for the kind invitation. Taking part in this event was a great opportunity to contribute, share our solutions, and engage with the community throughout the day.

The Novus Team at the Zorlu Holding Event

AI Hub

The Missing Link Between AI Agents and Users: AG-UI Protocol

What if AI agents could truly connect with users, making interactions smoother, faster, and more human in real time?

September 18, 2025

In today’s world, we frequently hear about AI agents — and we’ll continue to hear more as they evolve. These agents are no longer just standalone models; they’ve become systems that can communicate with other tools and collaborate effectively.

This is where protocols come into play, enabling agents to “speak the same language.” For example:

  • MCP (Model Context Protocol): Gave agents access to external tools.
  • A2A (Agent-to-Agent): Enabled agents to talk to one another.

Thanks to these protocols, AI agents have transformed into stronger, more grounded units of work.

But if you look closely, within this ecosystem agents are still silent helpers — running automation in the background without directly engaging with users.

And this is where a new protocol steps in: one that bridges backend agents with front-end applications. AG-UI!

The Agent Protocol Stack

What is AG-UI?

AG-UI is a protocol that standardizes the way AI agents connect with user applications. You can think of it as a universal translator: no matter what framework is running in the background, AG-UI enables AI-powered systems to communicate with front-end applications in real time.

How Does AG-UI Work?

AG-UI standardizes the connection between AI agents and front-end applications through event-based communication. In other words, everything that happens between the agent and the frontend flows as small, meaningful “events.” This makes the interaction both real-time and structured.

There are 16 event types grouped into 5 categories, enabling smart, synchronized communication between the agent and the UI:

  • Lifecycle Events: Track which stage the agent is in (e.g., started, in progress, completed).
  • Text Message Events: LLM-generated text streams in token by token. Thanks to these events, the UI can display the response as it’s being written.
  • Tool Call Events: Triggered when the agent calls an API or runs a function. The UI can display the process or even request user approval.
  • State Management Events: Keep the UI updated step by step as the agent generates plans, tables, or code.
  • Special Events: Designed for advanced, custom functionality such as notifications tied to a specific integration.

Each message follows a clearly defined JSON format with consistent structure—perfect for building dynamic UIs. Some examples include:

  • TEXT_MESSAGE_CONTENT
  • TOOL_CALL_START
  • TOOL_RESULT
  • STATE_DELTA
  • USER_EVENT

These JSON-based event streams are sent via a single HTTP POST request to the agent endpoint. The frontend can react to them instantly, whether they’re messages, tool calls, or state updates. This creates seamless real-time synchronization between frontend and backend in a single standard format.
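
To make that concrete, here is a minimal, hypothetical consumer in Python. It assumes the endpoint streams newline-delimited JSON events and that fields such as `type`, `delta`, and `toolCallName` exist; check the AG-UI specification for the exact wire format.

```python
# Minimal sketch of a client consuming an AG-UI event stream. The endpoint
# URL, payload shape, and field names are assumptions, not the official spec.
import json
import requests

def run_agent(prompt: str) -> None:
    with requests.post(
        "https://agent.example.com/awp",                      # hypothetical endpoint
        json={"messages": [{"role": "user", "content": prompt}]},
        stream=True,
    ) as resp:
        for line in resp.iter_lines():
            if not line:
                continue
            event = json.loads(line)
            if event["type"] == "TEXT_MESSAGE_CONTENT":
                print(event["delta"], end="", flush=True)     # render tokens live
            elif event["type"] == "TOOL_CALL_START":
                print(f"\n[calling tool: {event['toolCallName']}]")
            elif event["type"] == "STATE_DELTA":
                pass  # apply the diff to local state (see the sketch further below)

run_agent("Plan a three-step product launch.")
```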

Agent User Interaction Protocol

Why Do We Need the AG-UI Protocol?

The greatest advantage of AG-UI is that it brings AI agents and users together in real-time, interactive experiences. Technically, however, building such agents is challenging. Some of the main difficulties include:

  • Real-time streaming: LLM outputs arrive piece by piece (token by token). The UI must be able to display them instantly.
  • Tool orchestration: Agents execute code and call APIs. The UI should visualize this process and, when necessary, request user approval.
  • Shared state: Agents produce tables, plans, or code that evolve step by step. Continuously sending the entire dataset is inefficient—only the differences (diffs) should be transmitted (a sketch of this follows the list).
  • Concurrency & cancellation: Users may start multiple queries simultaneously and cancel one at any time. A clean management system (e.g., thread/run IDs) is essential for synchronization between backend and UI.
  • Different frameworks: With ecosystems like LangChain, CrewAI, and Mastra lacking a common standard, each UI must build its own adapter.
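
For the shared-state point above, here is a minimal sketch of applying a STATE_DELTA, assuming deltas arrive as RFC 6902 JSON Patch operations (an assumption; verify against the spec you target):

```python
# Minimal sketch: applying STATE_DELTA diffs instead of re-sending full state.
# Assumes deltas are RFC 6902 JSON Patch operations (pip install jsonpatch).
import jsonpatch

state: dict = {"plan": {"steps": []}}

def on_state_delta(event: dict) -> None:
    """Apply only the difference; the UI re-renders from the updated state."""
    global state
    state = jsonpatch.apply_patch(state, event["delta"])

on_state_delta({
    "type": "STATE_DELTA",
    "delta": [{"op": "add", "path": "/plan/steps/-", "value": "draft launch copy"}],
})
print(state)  # {'plan': {'steps': ['draft launch copy']}}
```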

AG-UI solves all of these challenges. It enables dynamic, always-up-to-date user interfaces, seamless data synchronization, workflows that include user input, and tool calls triggered directly from the interface.

In short, AG-UI unlocks the full power of backend AI agents and delivers it right into products—giving users smoother, more collaborative experiences.

Let’s Wrap It Up

The new generation of AI applications is moving beyond standalone systems that simply “give answers.” Instead, they are becoming co-creative partners that collaborate with users. Real-time interactivity, live state streaming, instant feedback, and shareable states—all of these are now within reach, unified under a single language and protocol.

If your next product is going to be agent-powered, AG-UI provides the perfect foundation to make the experience consistent, interactive, and truly real-time.

Newsroom

Youth, Entrepreneurship, and Leadership Take the Stage in Gaziantep

Our CRO Vorga Can joined the Youth Engagement Summit in Gaziantep to talk entrepreneurship, leadership, and AI with young talents.

September 12, 2025

Last Saturday marked a memorable moment in Gaziantep, where our CRO, Vorga Can, was invited as a speaker for the “Gençlik, Girişimcilik ve Liderlik: Yeni Nesil İş Becerileri” (“Youth, Entrepreneurship, and Leadership: Next-Generation Business Skills”) panel. The session was part of the Youth Engagement Summit, organized by Habitat Derneği and UNICEF Türkiye Milli Komitesi.

The panel brought together a room full of bright, curious, and driven young people from across the country. Discussions centered around entrepreneurship, leadership, sustainability, artificial intelligence, and the constantly evolving digital landscape. The energy in the room was fueled by questions, stories, and the unique perspectives of each participant.

Reflecting on the event, Vorga noted the importance of continuing to push boundaries and challenge ourselves. With the next generation rising fast, the responsibility of today’s leaders is not only to guide but also to keep raising the bar, ensuring the path ahead is shaped by innovation, resilience, and collaboration.

We sincerely thank Habitat Derneği and UNICEF Türkiye for creating such an impactful platform. Being part of this initiative was both valuable and inspiring, and we look forward to continuing to support opportunities that empower young people.

AI Hub

The Truth About Foundation Models

Are foundation models the future of AI for enterprises? Here’s the truth: powerful, yes — but incomplete without optimization.

September 9, 2025

Artificial intelligence has entered an era where a few massive systems dominate the landscape. These are called foundation models — large-scale AI models trained on enormous datasets that serve as the basis for many downstream applications. From natural language processing to computer vision, foundation models act as the scaffolding on which new AI solutions are built.

But as enterprises rush to adopt them, critical questions arise. Are foundation models the best long-term strategy? What are their trade-offs? And how do they connect with more efficient approaches like Generative Optimization: Less Effort, More Output?

This blog takes a deep dive into the truth about foundation models: their power, their pitfalls, and their future in enterprise AI.

What Are Foundation Models?

Foundation models are large, pre-trained systems designed to perform a wide variety of tasks. Instead of building a new model from scratch for every application, companies can leverage foundation models as a base and adapt them through fine-tuning or optimization.

They are called “foundation” because they provide the groundwork for everything built on top. Just as a strong building foundation determines the stability of a skyscraper, foundation models shape the reliability of AI applications.

Common examples include large language models (LLMs) like GPT, multimodal systems that handle both text and images, and specialized models used in scientific research.

For enterprises, the appeal is obvious: a single system that can support multiple use cases, from customer service bots to advanced data analytics.

Why Enterprises Adopt Foundation Models

The surge of interest in foundation models comes from three major factors:

  1. Versatility
    A foundation model can be applied across tasks without retraining from zero. This flexibility is appealing to companies that want broad AI capability.
  2. Performance
    Foundation models achieve state-of-the-art results in many benchmarks, proving their strength in language understanding, vision recognition, and reasoning.
  3. Time Savings
    Instead of investing months into building a narrow AI system, enterprises can integrate foundation models and start testing use cases within weeks.

This combination of power and convenience has made foundation models the “default” starting point for modern AI strategies.

The Downsides of Foundation Models

While their benefits are undeniable, foundation models also come with serious challenges that enterprises cannot ignore.

1. High Costs

Training and deploying foundation models requires massive compute resources. Cloud usage bills can skyrocket, especially if enterprises rely on them for continuous, large-scale tasks.

2. Limited Customization

Even though they are versatile, foundation models are not tailored to specific industries out of the box. Fine-tuning is often required, which adds complexity and expense.

3. Hallucinations

A well-known flaw of foundation models is their tendency to produce false or misleading outputs. In sectors like healthcare or finance, this can be catastrophic.

4. Opaque Decision-Making

Foundation models are black boxes. Their reasoning processes are difficult to explain, making compliance and accountability a problem for regulated industries.

5. Environmental Impact

Training massive models consumes enormous amounts of energy. As sustainability becomes a business priority, the carbon footprint of foundation models cannot be overlooked.

The Scale Debate: Bigger Isn’t Always Better

For years, the AI community operated under a simple assumption: scaling up model size and training data leads to better performance. And to a degree, this is true — larger foundation models often outperform smaller ones.

But research and real-world use cases show a limit to this logic. Beyond a certain point, scaling leads to diminishing returns. The cost of training doubles or triples, while the accuracy gains shrink.

This is why enterprises are beginning to explore alternatives like generative engine optimization, which focuses on making models more efficient rather than simply larger. As discussed in Generative Optimization: Less Effort, More Output, efficiency may matter more than sheer size in the long run.

Foundation Models in Practice: Industry Use Cases

Healthcare

Hospitals use foundation models to analyze medical texts, generate diagnostic notes, or power clinical decision-support systems. While useful, hallucinations remain a barrier to adoption in high-stakes environments.

Finance

Banks experiment with foundation models for fraud detection, risk analysis, and customer support. However, regulatory compliance requires explainability, something foundation models struggle with.

Retail

Retailers use them for product recommendations, chatbots, and trend analysis. Yet, without optimization, outputs can feel generic and fail to capture brand-specific needs.

Manufacturing

Foundation models support predictive maintenance and supply chain insights. Still, they need integration with specialized workflows for reliable performance.

Across all industries, the theme is the same: foundation models are powerful but incomplete. They require optimization and orchestration to deliver consistent enterprise value.

The Hidden Truth: Foundation Models Need Optimization

The truth about foundation models is simple: they are a starting point, not a complete solution. Enterprises that rely solely on them often face scalability issues, compliance risks, and unsustainable costs.

This is where optimization enters the picture. By refining workflows, engineering prompts, and curating domain-specific datasets, businesses can amplify the value of foundation models without paying for endless scaling.

As highlighted in Generative Optimization: Less Effort, More Output, optimization offers a path forward that emphasizes efficiency, accuracy, and sustainability.

Case Study: Foundation Models in Customer Support

A global telecom company adopted a foundation model to power its customer service chatbot. Initial results were impressive: response times dropped by 40%, and customers reported improved satisfaction.

But cracks soon appeared. The chatbot occasionally gave wrong billing information, raising compliance concerns. It also generated high cloud costs due to constant usage.

The company introduced optimization techniques:

  • Curated customer service scripts for training.
  • Implemented prompt templates to reduce hallucinations.
  • Integrated an orchestration system that routed complex cases to human agents.

The result? Costs dropped by 25%, accuracy improved significantly, and compliance risks were reduced.

This case illustrates the reality: foundation models are powerful, but they must be optimized to work effectively in enterprise environments.
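
As a loose, hypothetical sketch of the routing idea from this case (the threshold and the billing check are invented for illustration), low-confidence or compliance-sensitive answers get escalated instead of sent to the customer:

```python
# Hypothetical sketch of routing complex cases to human agents.
def answer_with_confidence(question: str) -> tuple[str, float]:
    # Stand-in for a model call that also returns a confidence score.
    return ("Your last invoice was 42.50 EUR.", 0.62)

def handle_ticket(question: str, threshold: float = 0.8) -> str:
    answer, confidence = answer_with_confidence(question)
    if confidence < threshold or "billing" in question.lower():
        return "Escalated to a human agent."  # compliance-sensitive path
    return answer

print(handle_ticket("Why did my billing amount change?"))
```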

The Future of Foundation Models

Where are foundation models headed? Three major trends stand out:

  1. Smaller, Specialized Models
    Instead of one giant system, we’ll see leaner models specialized for industries or workflows.
  2. Hybrid Approaches
    Enterprises will combine foundation models with optimization layers, orchestration systems, and smaller agents.
  3. Greater Regulation
    Governments are introducing AI regulations that emphasize transparency and accountability. Foundation models will need to evolve to meet these standards.

Ultimately, the future will not belong to the biggest models, but to those that combine foundation strength with smart optimization.

Conclusion: The Balanced Path

The truth about foundation models is that they are powerful but imperfect. They offer enterprises a strong starting point, but not a complete solution. Without optimization, they risk being too costly, too opaque, and too generic.

The smarter path forward lies in balance: using foundation models as a base while applying strategies like Generative Optimization: Less Effort, More Output to maximize efficiency and accuracy.

For enterprises, this means looking beyond the hype and asking a simple question: how can we achieve less effort, more output?

Frequently Asked Questions

Are foundation models always necessary for enterprise AI?
Not always. While they provide a strong base, smaller specialized models can outperform foundation models in narrow domains.

How can enterprises control the cost of foundation models?
By combining them with optimization strategies that reduce compute demand and streamline workflows.

Will foundation models remain dominant in the AI landscape?
Yes, but their dominance will be reshaped. Enterprises will increasingly focus on blending foundation models with efficient optimization.

Novus Meetups

Novus Meetups: Product & Design Talks

On Friday, our Sr. Product Manager and Head of Design showed how product and design truly work together to shape AI products!

September 8, 2025

September felt like the right time to bring our Novus Meetups series back, and we couldn’t have asked for a better way to start. Last Friday, with the support of QNBEYOND, we hosted “Novus Meetups: Product & Design Talks.” It was an afternoon filled with insights, conversations, and plenty of coffee.

Building an AI product is never just the job of one team. Product and design need to move in sync, shaping each other’s direction along the way. That’s exactly why our Sr. Product Manager, Hüseyin Umut Dokuzelma, and our Head of Design, Ece Demircioğlu, joined us to share their experiences. They walked us through how ideas evolve into real products, the role design plays in steering roadmaps, the challenges they’ve faced, and why clear communication between teams makes everything possible.

Our Sr. Product Manager Umut and Head of Design Ece shared how product and design come together seamlessly to shape AI products.

The energy they brought to the stage — combined with the thoughtful visual design of their presentation — made the session not just informative but also engaging. And once the Q&A began, it turned into an open exchange, where participants asked real questions and received candid, experience-driven answers.

We wrapped up with networking over coffee, where product managers, designers, and industry professionals came together to continue the conversations and connect more personally. These moments reminded us why we started Novus Meetups in the first place: to create spaces where knowledge is shared openly and connections are built naturally.

A heartfelt thank you to QNBEYOND for supporting us for the second time. And if you’d like to keep the conversation going, you can always reach out directly to Umut and Ece on LinkedIn.

This is only the beginning. Novus Meetups will continue, and we’d love to see you at the next one. You can check out our upcoming events here: https://lu.ma/calendar/cal-IoTxogmVo0mN6bX.

Q&A Session from Novus Meetups

