Recap of First Rays Annual Day 2025

Last month, First Rays Annual Day gathering brought together founders, CTOs, CISOs, and operators across our ecosystem to trade notes on whatโ€™s ๐˜ข๐˜ค๐˜ต๐˜ถ๐˜ข๐˜ญ๐˜ญ๐˜บ working with AI in the enterprise and where the sharp edges still are.

To kickstart the panel, Hersh, CEO of Allstacks, showcased how his customers, about 50+ large enterprises, are rolling out AI-tools in their orgs. He highlighted that AI-native engineering platform leaders integrating across the SDLC to measure time, cost, and bottlenecksโ€”and now shipping agents that move from analytics to action. See their recent product announcement of Deep Research agents.

Some of the topics that we covered during the panel.

1) ๐€๐ˆ ๐š๐๐จ๐ฉ๐ญ๐ข๐จ๐ง ๐ข๐ฌ ๐ซ๐ž๐š๐ฅโ€”๐š๐ง๐ ๐ฎ๐ง๐ž๐ฏ๐ž๐ง.

A fresh engineering survey (โ‰ˆ250 ๐˜ญ๐˜ฆ๐˜ข๐˜ฅ๐˜ฆ๐˜ณ๐˜ด ๐˜จ๐˜ญ๐˜ฐ๐˜ฃ๐˜ข๐˜ญ๐˜ญ๐˜บ) shows:

~90% ๐จ๐Ÿ ๐ž๐ง๐ ๐ข๐ง๐ž๐ž๐ซ๐ฌ are already using AI for new code and apps.

~85% ๐Ÿ๐จ๐ซ ๐ญ๐ž๐ฌ๐ญ๐ข๐ง๐ /๐๐€ (many teams started here before code-gen).

~70% ๐Ÿ๐จ๐ซ ๐๐จ๐œ๐ฎ๐ฆ๐ž๐ง๐ญ๐š๐ญ๐ข๐จ๐ง(surprisingly lower than expected).

2) โ€œ๐ƒ๐š๐ฌ๐ก๐›๐จ๐š๐ซ๐๐ฌ ๐š๐ซ๐ž ๐๐ž๐š๐; ๐ฌ๐ญ๐š๐ซ๐ญ ๐ฐ๐ข๐ญ๐ก ๐ญ๐ก๐ž ๐š๐ง๐ฌ๐ฐ๐ž๐ซ.โ€

Leaders are ditching passive dashboards in favor of answer-first experiences (and increasingly, agents) that:

Surface โ€œwhatโ€™s happening, why, and what to do next,โ€

Attach evidence automatically (traces, diffs, PRs, cost deltas), and

Orchestrate remediation (creating tickets, automating rollbacks, proposing fixes).

3) ๐๐ซ๐จ๐ฏ๐ข๐ง๐  ๐‘๐Ž๐ˆ ๐ฐ๐ข๐ญ๐ก๐จ๐ฎ๐ญ ๐ก๐š๐ง๐-๐ฐ๐š๐ฏ๐ข๐ง๐ .

The practical framing that resonated: feature cost before vs. after AI adoption. If โ€œafterโ€ is less by more than the AI toolโ€™s cost, youโ€™re in the green. Teams that can price features (developer time ร— friction) can defend AI budgets credibly.

4) ๐•๐ข๐›๐ž ๐œ๐จ๐๐ข๐ง๐  โ†’ ๐ฏ๐ข๐›๐ž ๐ž๐ง๐ ๐ข๐ง๐ž๐ž๐ซ๐ข๐ง๐ .

As coding assistants proliferate, teams are moving beyond โ€œassist me in this fileโ€ to workflow-level orchestration (a.k.a. โ€œ๐˜ท๐˜ช๐˜ฃ๐˜ฆ ๐˜ฆ๐˜ฏ๐˜จ๐˜ช๐˜ฏ๐˜ฆ๐˜ฆ๐˜ณ๐˜ช๐˜ฏ๐˜จโ€): constructing repeatable multi-step flows that include generation, verification, testing, security checks, and deployment. Itโ€™s less about an LLMโ€™s gut feel and more about a disciplined pipeline.

5) ๐’๐ก๐ข๐ฉ ๐ฌ๐ž๐œ๐ฎ๐ซ๐ž ๐›๐ฒ ๐๐ž๐Ÿ๐š๐ฎ๐ฅ๐ญ (๐จ๐ซ ๐ฉ๐š๐ฒ 4๐ฑ ๐ฅ๐š๐ญ๐ž๐ซ).

Panelists repeatedly emphasized: getting security involved early (requirements, models, prompts, data access, testing) saves ~4 out of 5 dollars versus bolt-on fixes after release. โ€œShift-leftโ€ is not a slogan here; itโ€™s the only way AI features reach production safely.

6) ๐–๐ก๐จ โ€œ๐จ๐ฐ๐ง๐ฌโ€ ๐€๐ˆ ๐ญ๐ซ๐ฎ๐ฌ๐ญ?

The consensus: joint ownership across product, platform/engineering, and security. Security teams define guardrails; platform teams implement policy and enforcement; product owns usability and measurable outcomes. Success cases embed security and platform early in the product loop.

โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”-

We were fortunate to have some of our founders, Saurabh Shintre Uri Maoz Aaron Painter and Nir Valtman join in our second panel discussion around Trust, Safety & Identityโ€”โ€œHigh-Trust AI Without the Headachesโ€

Here is a summary of our discussion-

1) High-trust apps are different.

The panelโ€™s opening salvo: AI remains too risky โ€œas-isโ€ for high-trust workflows without additional controls. The examples ranged from public โ€œfunny but costlyโ€ bot mishaps to serious misbehavior in lab settings. The message wasnโ€™t alarmistโ€”just clear: production = controls.

2) Catch problems before they emerge.

Safety teams are using pre-deployment techniquesโ€”from adversarial prompts and jailbreak libraries to model-internals monitoringโ€”to detect harmful tendencies early (bias, manipulation, self-justifying refusal bypasses). Think of it like pre-release pen-testing for models.

3) Defense-in-depth beats model-of-the-week.

Guardrails stack up across layers:

Input: prompt hygiene, policy filtering, identity checks, rate limits.

Model: safety systems, allow/deny lists, tool-use constraints.

Output: toxicity/PII filters, grounding checks, retrieval whitelists.

Environment: least-privilege keys, egress controls, audit logging.

This layering outlasts model churn and gives teams a stable security posture.

4) Identity is the new perimeter.

As more workflows become chat- or agent-driven, the system must know who is asking and what theyโ€™re allowed to do. Strong identity (step-up verification on risky actions, anti-fraud signals, session binding) prevents prompt-driven account takeovers and high-impact abuse.

5) The real-world adoption pattern: it varies.

Across enterprises, usageโ€”and valueโ€”differ by team and use case. Some groups see dramatic productivity gains (e.g., code-gen plus automated tests), others stall without proper data access, evaluation, or change management. Governance and enablement determine the slope.

Panel with Hersh, Aftab, Sugam, Yossi and Alok

Panel with Amit, Aaron, Uri, Saurabh and Nir

โ€œOnly ~50% ๐ซ๐ž๐ฉ๐จ๐ซ๐ญ ๐Ÿ๐จ๐ซ๐ฆ๐š๐ฅ ๐จ๐ซ๐ ๐š๐ง๐ข๐ณ๐š๐ญ๐ข๐จ๐ง๐š๐ฅ ๐ฌ๐ฎ๐ฉ๐ฉ๐จ๐ซ๐ญ; ~18% said teams are using AI regardless. ๐˜›๐˜ณ๐˜ข๐˜ฏ๐˜ด๐˜ญ๐˜ข๐˜ต๐˜ช๐˜ฐ๐˜ฏ: ๐˜ฃ๐˜ฐ๐˜ต๐˜ต๐˜ฐ๐˜ฎ-๐˜ถ๐˜ฑ ๐˜ถ๐˜ด๐˜ข๐˜จ๐˜ฆ ๐˜ช๐˜ด ๐˜ฐ๐˜ถ๐˜ต๐˜ฑ๐˜ข๐˜ค๐˜ช๐˜ฏ๐˜จ ๐˜ต๐˜ฐ๐˜ฑ-๐˜ฅ๐˜ฐ๐˜ธ๐˜ฏ ๐˜ฑ๐˜ฐ๐˜ญ๐˜ช๐˜ค๐˜บ.โ€
— Hersh, CEO of AllStacks
โ€œ๐Š๐ž๐ฒ ๐ญ๐š๐ค๐ž๐š๐ฐ๐š๐ฒ๐ฌ ๐Ÿ๐ซ๐จ๐ฆ ๐ญ๐ก๐ž ๐๐š๐ง๐ž๐ฅ
๐€๐๐จ๐ฉ๐ญ๐ข๐จ๐ง ๐ข๐ฌ ๐š๐ฅ๐ซ๐ž๐š๐๐ฒ ๐ก๐ž๐ซ๐ž. Policy and governance must catch up to usage, not the other way around.

๐Œ๐จ๐ฏ๐ž ๐Ÿ๐ซ๐จ๐ฆ ๐š๐ง๐š๐ฅ๐ฒ๐ญ๐ข๐œ๐ฌ ๐ญ๐จ ๐š๐ ๐ž๐ง๐œ๐ฒ. Reorient roadmaps around answer-first experiences and automated actions with audit trails.

๐๐ฎ๐š๐ง๐ญ๐ข๐Ÿ๐ฒ ๐‘๐Ž๐ˆ ๐š๐ญ ๐ญ๐ก๐ž ๐Ÿ๐ž๐š๐ญ๐ฎ๐ซ๐ž ๐ฅ๐ž๐ฏ๐ž๐ฅ. Track feature cost deltas to justify AI spend.

๐Ž๐ฉ๐ž๐ซ๐š๐ญ๐ข๐จ๐ง๐š๐ฅ๐ข๐ณ๐ž โ€œ๐ฏ๐ข๐›๐ž ๐ž๐ง๐ ๐ข๐ง๐ž๐ž๐ซ๐ข๐ง๐ .โ€ Treat AI as a workflow with gates (tests, SAST/DAST, policy, approvals), not a magic autocomplete.

๐ˆ๐ง๐ฌ๐ญ๐ข๐ญ๐ฎ๐ญ๐ข๐จ๐ง๐š๐ฅ๐ข๐ณ๐ž ๐ฌ๐ก๐š๐ซ๐ž๐ ๐จ๐ฐ๐ง๐ž๐ซ๐ฌ๐ก๐ข๐ฉ of trust across security, platform, and product.โ€
— Panels
 
 
Next
Next

Databricks is raising a Series K Investment at >$100 billion valuation