Recap of First Rays Annual Day 2025
Last month, First Rays Annual Day gathering brought together founders, CTOs, CISOs, and operators across our ecosystem to trade notes on whatโs ๐ข๐ค๐ต๐ถ๐ข๐ญ๐ญ๐บ working with AI in the enterprise and where the sharp edges still are.
To kickstart the panel, Hersh, CEO of Allstacks, showcased how his customers, about 50+ large enterprises, are rolling out AI-tools in their orgs. He highlighted that AI-native engineering platform leaders integrating across the SDLC to measure time, cost, and bottlenecksโand now shipping agents that move from analytics to action. See their recent product announcement of Deep Research agents.
Some of the topics that we covered during the panel.
1) ๐๐ ๐๐๐จ๐ฉ๐ญ๐ข๐จ๐ง ๐ข๐ฌ ๐ซ๐๐๐ฅโ๐๐ง๐ ๐ฎ๐ง๐๐ฏ๐๐ง.
A fresh engineering survey (โ250 ๐ญ๐ฆ๐ข๐ฅ๐ฆ๐ณ๐ด ๐จ๐ญ๐ฐ๐ฃ๐ข๐ญ๐ญ๐บ) shows:
~90% ๐จ๐ ๐๐ง๐ ๐ข๐ง๐๐๐ซ๐ฌ are already using AI for new code and apps.
~85% ๐๐จ๐ซ ๐ญ๐๐ฌ๐ญ๐ข๐ง๐ /๐๐ (many teams started here before code-gen).
~70% ๐๐จ๐ซ ๐๐จ๐๐ฎ๐ฆ๐๐ง๐ญ๐๐ญ๐ข๐จ๐ง(surprisingly lower than expected).
2) โ๐๐๐ฌ๐ก๐๐จ๐๐ซ๐๐ฌ ๐๐ซ๐ ๐๐๐๐; ๐ฌ๐ญ๐๐ซ๐ญ ๐ฐ๐ข๐ญ๐ก ๐ญ๐ก๐ ๐๐ง๐ฌ๐ฐ๐๐ซ.โ
Leaders are ditching passive dashboards in favor of answer-first experiences (and increasingly, agents) that:
Surface โwhatโs happening, why, and what to do next,โ
Attach evidence automatically (traces, diffs, PRs, cost deltas), and
Orchestrate remediation (creating tickets, automating rollbacks, proposing fixes).
3) ๐๐ซ๐จ๐ฏ๐ข๐ง๐ ๐๐๐ ๐ฐ๐ข๐ญ๐ก๐จ๐ฎ๐ญ ๐ก๐๐ง๐-๐ฐ๐๐ฏ๐ข๐ง๐ .
The practical framing that resonated: feature cost before vs. after AI adoption. If โafterโ is less by more than the AI toolโs cost, youโre in the green. Teams that can price features (developer time ร friction) can defend AI budgets credibly.
4) ๐๐ข๐๐ ๐๐จ๐๐ข๐ง๐ โ ๐ฏ๐ข๐๐ ๐๐ง๐ ๐ข๐ง๐๐๐ซ๐ข๐ง๐ .
As coding assistants proliferate, teams are moving beyond โassist me in this fileโ to workflow-level orchestration (a.k.a. โ๐ท๐ช๐ฃ๐ฆ ๐ฆ๐ฏ๐จ๐ช๐ฏ๐ฆ๐ฆ๐ณ๐ช๐ฏ๐จโ): constructing repeatable multi-step flows that include generation, verification, testing, security checks, and deployment. Itโs less about an LLMโs gut feel and more about a disciplined pipeline.
5) ๐๐ก๐ข๐ฉ ๐ฌ๐๐๐ฎ๐ซ๐ ๐๐ฒ ๐๐๐๐๐ฎ๐ฅ๐ญ (๐จ๐ซ ๐ฉ๐๐ฒ 4๐ฑ ๐ฅ๐๐ญ๐๐ซ).
Panelists repeatedly emphasized: getting security involved early (requirements, models, prompts, data access, testing) saves ~4 out of 5 dollars versus bolt-on fixes after release. โShift-leftโ is not a slogan here; itโs the only way AI features reach production safely.
6) ๐๐ก๐จ โ๐จ๐ฐ๐ง๐ฌโ ๐๐ ๐ญ๐ซ๐ฎ๐ฌ๐ญ?
The consensus: joint ownership across product, platform/engineering, and security. Security teams define guardrails; platform teams implement policy and enforcement; product owns usability and measurable outcomes. Success cases embed security and platform early in the product loop.
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโ-
We were fortunate to have some of our founders, Saurabh Shintre Uri Maoz Aaron Painter and Nir Valtman join in our second panel discussion around Trust, Safety & IdentityโโHigh-Trust AI Without the Headachesโ
Here is a summary of our discussion-
1) High-trust apps are different.
The panelโs opening salvo: AI remains too risky โas-isโ for high-trust workflows without additional controls. The examples ranged from public โfunny but costlyโ bot mishaps to serious misbehavior in lab settings. The message wasnโt alarmistโjust clear: production = controls.
2) Catch problems before they emerge.
Safety teams are using pre-deployment techniquesโfrom adversarial prompts and jailbreak libraries to model-internals monitoringโto detect harmful tendencies early (bias, manipulation, self-justifying refusal bypasses). Think of it like pre-release pen-testing for models.
3) Defense-in-depth beats model-of-the-week.
Guardrails stack up across layers:
Input: prompt hygiene, policy filtering, identity checks, rate limits.
Model: safety systems, allow/deny lists, tool-use constraints.
Output: toxicity/PII filters, grounding checks, retrieval whitelists.
Environment: least-privilege keys, egress controls, audit logging.
This layering outlasts model churn and gives teams a stable security posture.
4) Identity is the new perimeter.
As more workflows become chat- or agent-driven, the system must know who is asking and what theyโre allowed to do. Strong identity (step-up verification on risky actions, anti-fraud signals, session binding) prevents prompt-driven account takeovers and high-impact abuse.
5) The real-world adoption pattern: it varies.
Across enterprises, usageโand valueโdiffer by team and use case. Some groups see dramatic productivity gains (e.g., code-gen plus automated tests), others stall without proper data access, evaluation, or change management. Governance and enablement determine the slope.
Panel with Hersh, Aftab, Sugam, Yossi and Alok
Panel with Amit, Aaron, Uri, Saurabh and Nir
โOnly ~50% ๐ซ๐๐ฉ๐จ๐ซ๐ญ ๐๐จ๐ซ๐ฆ๐๐ฅ ๐จ๐ซ๐ ๐๐ง๐ข๐ณ๐๐ญ๐ข๐จ๐ง๐๐ฅ ๐ฌ๐ฎ๐ฉ๐ฉ๐จ๐ซ๐ญ; ~18% said teams are using AI regardless. ๐๐ณ๐ข๐ฏ๐ด๐ญ๐ข๐ต๐ช๐ฐ๐ฏ: ๐ฃ๐ฐ๐ต๐ต๐ฐ๐ฎ-๐ถ๐ฑ ๐ถ๐ด๐ข๐จ๐ฆ ๐ช๐ด ๐ฐ๐ถ๐ต๐ฑ๐ข๐ค๐ช๐ฏ๐จ ๐ต๐ฐ๐ฑ-๐ฅ๐ฐ๐ธ๐ฏ ๐ฑ๐ฐ๐ญ๐ช๐ค๐บ.โ
โ๐๐๐ฒ ๐ญ๐๐ค๐๐๐ฐ๐๐ฒ๐ฌ ๐๐ซ๐จ๐ฆ ๐ญ๐ก๐ ๐๐๐ง๐๐ฅ
๐๐๐จ๐ฉ๐ญ๐ข๐จ๐ง ๐ข๐ฌ ๐๐ฅ๐ซ๐๐๐๐ฒ ๐ก๐๐ซ๐. Policy and governance must catch up to usage, not the other way around.
๐๐จ๐ฏ๐ ๐๐ซ๐จ๐ฆ ๐๐ง๐๐ฅ๐ฒ๐ญ๐ข๐๐ฌ ๐ญ๐จ ๐๐ ๐๐ง๐๐ฒ. Reorient roadmaps around answer-first experiences and automated actions with audit trails.
๐๐ฎ๐๐ง๐ญ๐ข๐๐ฒ ๐๐๐ ๐๐ญ ๐ญ๐ก๐ ๐๐๐๐ญ๐ฎ๐ซ๐ ๐ฅ๐๐ฏ๐๐ฅ. Track feature cost deltas to justify AI spend.
๐๐ฉ๐๐ซ๐๐ญ๐ข๐จ๐ง๐๐ฅ๐ข๐ณ๐ โ๐ฏ๐ข๐๐ ๐๐ง๐ ๐ข๐ง๐๐๐ซ๐ข๐ง๐ .โ Treat AI as a workflow with gates (tests, SAST/DAST, policy, approvals), not a magic autocomplete.
๐๐ง๐ฌ๐ญ๐ข๐ญ๐ฎ๐ญ๐ข๐จ๐ง๐๐ฅ๐ข๐ณ๐ ๐ฌ๐ก๐๐ซ๐๐ ๐จ๐ฐ๐ง๐๐ซ๐ฌ๐ก๐ข๐ฉ of trust across security, platform, and product.โ