AWS Lambda Durable Functions expand serverless capabilities while raising lock‑in questions
Recommended for you

Snowflake sharpens AI and migration playbook while a major outage raises resilience questions
Snowflake has accelerated feature launches, acquisitions, and commercial model partnerships to broaden its AI, observability, and migration capabilities, while a multi‑hour, multi‑region platform outage in late 2025 exposed operational fragility. Recent moves include a multi‑year, roughly $200 million commercial agreement to surface OpenAI models inside the Data Cloud and the rollout of Cortex Code, a data‑aware coding assistant, but integration, governance, and reliability will determine whether these advances become durable customer advantages.
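
For a flavor of what in‑place model access could look like, here is a minimal sketch that calls Snowflake's Cortex COMPLETE function from the official Node.js driver. The account details are placeholders, and the "openai-gpt-4.1" model identifier is an assumption about how the reported OpenAI agreement might surface; the actual model list depends on the rollout.

```typescript
// Minimal sketch: invoking a model from inside Snowflake via Cortex,
// using the official Node.js driver (`snowflake-sdk`).
// Account, credentials, and the model identifier are placeholders.
import snowflake from "snowflake-sdk";

const connection = snowflake.createConnection({
  account: "my_account", // hypothetical account locator
  username: "my_user",
  password: process.env.SNOWFLAKE_PASSWORD ?? "",
});

connection.connect((err) => {
  if (err) throw err;
  connection.execute({
    // SNOWFLAKE.CORTEX.COMPLETE(model, prompt) is Cortex's completion
    // function; "openai-gpt-4.1" is an assumed identifier for an
    // OpenAI model surfaced through the reported partnership.
    sqlText: `SELECT SNOWFLAKE.CORTEX.COMPLETE(
                'openai-gpt-4.1',
                'Summarize yesterday''s failed ingestion jobs'
              ) AS answer`,
    complete: (execErr, _stmt, rows) => {
      if (execErr) throw execErr;
      console.log(rows?.[0]); // model output returned as a column value
    },
  });
});
```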

Databricks launches Lakebase — a serverless OLTP platform that rethinks transactional databases
Databricks unveiled Lakebase, a serverless operational database that runs PostgreSQL-compatible compute over lakehouse storage to make transactional data immediately queryable by analytics engines. Early customers report dramatic cuts in application delivery time, while the architecture reframes database management as a telemetry and analytics problem suited to programmatic provisioning and AI-driven agents.
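
Because Lakebase exposes PostgreSQL‑compatible compute, an ordinary Postgres client should suffice for the transactional side. A minimal sketch with node‑postgres, where the connection string and the orders table are hypothetical:

```typescript
// Minimal sketch: writing OLTP rows to a Postgres-compatible Lakebase
// endpoint with node-postgres (`pg`). Connection string and the
// `orders` table are hypothetical.
import pg from "pg";

const client = new pg.Client({
  connectionString: process.env.LAKEBASE_URL, // e.g. postgres://host:5432/app
});

async function recordOrder(orderId: string, amountCents: number) {
  await client.connect();
  try {
    // Ordinary transactional write; per the launch framing, the same
    // data then becomes queryable by lakehouse analytics engines
    // without a separate ETL hop.
    await client.query("BEGIN");
    await client.query(
      "INSERT INTO orders (id, amount_cents) VALUES ($1, $2)",
      [orderId, amountCents],
    );
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    await client.end();
  }
}

recordOrder("ord-123", 4999).catch(console.error);
```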

Amazon leans on in‑house Trainium chips to cut AI costs and jump‑start AWS growth
Amazon is accelerating deployment of its custom Trainium AI accelerators to lower customer compute costs and shore up AWS revenue momentum. The move sits inside a broader industry shift toward bespoke silicon — amid supply‑chain constraints and competing hyperscaler designs — so investors will treat upcoming AWS results as a test of whether these chips can produce sustained growth and margin gains.

OpenAI pushes agents from ephemeral assistants to persistent workers with memory, shells, and Skills
OpenAI’s Responses API now adds server-side state compaction, hosted shell containers, and a Skills packaging standard to support long-running, reproducible agent workflows. Early partner reports and ecosystem moves (including large-context advances from rivals) suggest the feature set accelerates production adoption while concentrating responsibility for governance, secrets, and runtime controls.
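
The persistent‑worker pattern is easiest to see in the chaining primitive the Responses API already exposes: each turn references the previous response's id, so conversational state stays server‑side instead of being replayed by the client. A minimal sketch with the Node SDK, where the model name is an assumption and the compaction, shell, and Skills features are not themselves shown:

```typescript
// Minimal sketch: chaining Responses API calls so state lives
// server-side. Requires OPENAI_API_KEY; the model name is an assumption.
import OpenAI from "openai";

const client = new OpenAI();

async function main() {
  // First turn: the server stores the response and its state.
  const first = await client.responses.create({
    model: "gpt-4.1",
    input: "Start auditing the repo's dependency licenses.",
  });
  console.log(first.output_text);

  // Later turn: reference the stored response instead of replaying
  // the full transcript; the agent resumes where it left off.
  const next = await client.responses.create({
    model: "gpt-4.1",
    previous_response_id: first.id,
    input: "Continue with the dev dependencies.",
  });
  console.log(next.output_text);
}

main().catch(console.error);
```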

Amazon tightens controls after AI coding assistant triggers limited AWS disruptions
Two internal incidents tied to Amazon’s developer-facing AI tools prompted access-control fixes and mandatory reviews, with Amazon attributing the root cause to human permissions errors rather than autonomous AI behavior. AWS says customer impact was minimal and has rolled out peer review and training to reduce recurrence.
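
The fix Amazon describes is organizational, but the permissions principle is easy to illustrate. A hedged sketch of scoping an assistant's role to read‑only access with the AWS SDK for JavaScript; the role, policy, and bucket names are hypothetical and not Amazon's actual remediation:

```typescript
// Minimal sketch: attaching a narrowly scoped, read-only policy to a
// role used by a coding assistant. Policy content and names are
// hypothetical illustrations of least privilege, not Amazon's fix.
import { IAMClient, PutRolePolicyCommand } from "@aws-sdk/client-iam";

const iam = new IAMClient({ region: "us-east-1" });

const readOnlyPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "AssistantReadOnly",
      Effect: "Allow",
      Action: ["s3:GetObject", "s3:ListBucket"], // no write/delete verbs
      Resource: [
        "arn:aws:s3:::example-assistant-workspace",
        "arn:aws:s3:::example-assistant-workspace/*",
      ],
    },
  ],
};

await iam.send(
  new PutRolePolicyCommand({
    RoleName: "coding-assistant-role", // hypothetical role name
    PolicyName: "assistant-read-only",
    PolicyDocument: JSON.stringify(readOnlyPolicy),
  }),
);
```
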
How AI Is Reshaping Engineering Workflows in the U.S.
AI is shifting engineering from manual implementation toward faster, experiment-driven cycles, greater emphasis on documentation and intent, and new platform and data‑architecture demands. Real‑world platform partnerships (for example, Snowflake’s reported deal to embed OpenAI models within its data platform) illustrate both the convenience of in‑place model access and the procurement, cost, and governance tradeoffs. Those tradeoffs amplify the need for provenance, policy automation, unified data views, and platform engineering to avoid opaque agentic outputs and vendor lock‑in.

Private cloud regains ground as AI reshapes cloud cost and risk calculus
Enterprises are pushing persistent inference, embedding caches, and retrieval layers into private or localized clouds to tame rising AI inference costs, latency and correlated outage risk, while keeping burst training and large-scale experimentation in public clouds. This hybrid posture is reinforced by shifts in data architecture toward projection-first stores, growing endpoint inference capability, and silicon-market dynamics that favor bespoke, on-prem stacks.
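
In code, that posture often reduces to a routing layer that keeps steady‑state inference on the private endpoint and spills overflow to a public‑cloud one. A rough sketch in which both endpoints and the queue‑depth threshold are hypothetical:

```typescript
// Minimal sketch: route steady-state inference to a private endpoint
// and overflow to a public-cloud endpoint. Both URLs and the
// threshold are hypothetical.
const PRIVATE_ENDPOINT = "https://inference.internal.example/v1/generate";
const PUBLIC_ENDPOINT = "https://api.cloud.example/v1/generate";
const MAX_PRIVATE_QUEUE = 32; // in-flight requests before spilling over

let inFlight = 0;

async function infer(prompt: string): Promise<string> {
  // Prefer the private cluster: predictable cost, data stays local.
  const target =
    inFlight < MAX_PRIVATE_QUEUE ? PRIVATE_ENDPOINT : PUBLIC_ENDPOINT;
  inFlight++;
  try {
    const res = await fetch(target, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    if (!res.ok) throw new Error(`inference failed: ${res.status}`);
    const { text } = (await res.json()) as { text: string };
    return text;
  } finally {
    inFlight--;
  }
}

infer("Classify this support ticket").then(console.log, console.error);
```
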
Deno launches Sandbox for AI-generated code and promotes Deploy to GA
Deno introduced a sandboxed runtime aimed at safely executing code produced by AI agents and released its reworked serverless platform as generally available. The sandbox isolates execution in lightweight microVMs, enforces network egress controls, and protects credentials while Deploy provides a new management plane and execution environment for JavaScript and TypeScript workloads.
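
Deno's existing permission model gives a flavor of the egress controls described: code gets no network access unless the invocation grants specific hosts. A small sketch using that model (the hosts are illustrative, and the Sandbox product's own API may differ):

```typescript
// Minimal sketch of egress control via Deno's permission model.
// Run as:  deno run --allow-net=api.example.com agent_task.ts
// Any fetch to a host outside the allow-list is denied at runtime.

// Check what this process is actually allowed to reach.
const allowed = await Deno.permissions.query({
  name: "net",
  host: "api.example.com",
});
console.log("api.example.com:", allowed.state); // "granted"

const denied = await Deno.permissions.query({
  name: "net",
  host: "evil.example.net",
});
console.log("evil.example.net:", denied.state); // "prompt" or "denied"

// Permitted egress succeeds; anything else fails before a socket opens.
const res = await fetch("https://api.example.com/status");
console.log(res.status);
```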