AI Insights: Key Global Developments in March 2026

Welcome to the March 2026 edition of our global AI update. 

The period from late February into mid-March saw AI moving from theory into practice. New model releases grabbed headlines (see OpenAI’s GPT-5.4 below), but the bigger story was how AI is being embedded into real-world systems and infrastructure. Tech leaders announced large-scale compute projects, cloud partnerships and enterprise rollouts, while regulators clarified rules for transparency and sovereignty. 

In short, the industry is shifting from experiment to execution: the organizations that integrate AI into core workflows and governance will pull ahead.

Here are the major moves worth noting.

OpenAI - GPT-5.4

OpenAI launched GPT-5.4 on March 5, 2026. 

This new version of the GPT series is explicitly designed for professional tasks, delivering far better performance on complex, multi-step projects than previous models. In internal benchmarks (the GDPval test of real-world job tasks), GPT-5.4 achieved a new state-of-the-art 83.0% success rate versus 70.9% for GPT-5.2. It excels at creating long documents, spreadsheets, slide decks and legal analyses with fewer errors. 

For example, it scored 91% on a legal-document benchmark, far above earlier models. 

OpenAI also gave GPT-5.4 “native computer-use” skills: it can navigate software UIs by interpreting screenshots and issuing mouse/keyboard commands. 

In practice this means agents using GPT-5.4 can browse websites, fill forms and manipulate documents on their own, improving automation. The model is also more efficient: it uses significantly fewer tokens than GPT-5.2 for the same tasks, so it runs faster and cheaper. GPT-5.4 is now available via ChatGPT (“GPT-5.4 Thinking/Pro”) and the API (as gpt-5.4 and gpt-5.4-pro), and OpenAI released a ChatGPT-for-Excel add-in to put its capabilities directly into analysts’ spreadsheets. In short, GPT-5.4 sets a new bar for AI-driven productivity, pushing generative AI further into everyday business use.
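For readers curious what “computer use” means mechanically, the sketch below shows the generic screenshot-to-action loop that such agents run. This is a hypothetical illustration only: every name in it (`capture_screen`, `plan_next_action`, `apply_action`) is ours, not part of OpenAI’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # e.g. "click", "type", or "done" (hypothetical action types)
    payload: dict  # e.g. coordinates for a click, text to type

def run_agent(plan_next_action, capture_screen, apply_action, max_steps=10):
    """Generic computer-use loop: screenshot in, UI action out, repeat.

    plan_next_action stands in for the model call; in a real agent it would
    send the screenshot (plus history) to the model and parse its reply.
    """
    history = []
    for _ in range(max_steps):
        screenshot = capture_screen()                  # grab current UI state
        action = plan_next_action(screenshot, history) # model decides next step
        history.append(action)
        if action.kind == "done":                      # model signals completion
            break
        apply_action(action)                           # issue mouse/keyboard command
    return history
```

The key design point is that the loop is model-agnostic: the model only ever sees pixels and emits structured actions, which is why the same pattern can drive a browser, a spreadsheet, or any other GUI.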

Source - OpenAI

Google DeepMind - Nano Banana 2

Google unveiled Nano Banana 2 on February 26, 2026, as a major upgrade to its image-generation models. 

This model merges the high-quality output of Google’s “Pro” image models with the lightning speed of its “Flash” inference engines. In practice, Nano Banana 2 delivers professional-grade image creation and editing (up to 4K resolution) much faster than before. 

According to Google, it “combines the power of Nano Banana Pro and the speed of Gemini Flash,” meaning developers and content teams can generate realistic images and iterate quickly. The model also emphasizes control and consistency: for example, it can better maintain the same subject across multiple images and supports fine-tuned editing prompts. Nano Banana 2 is being deployed across Google’s platforms: it’s now the default in the Gemini mobile app, and it will power image features in Google Search, Google Flow (its creative canvas product), Cloud APIs, and Ads. 

Notably, Google continued its SynthID program for content provenance: Nano Banana 2 outputs include C2PA metadata and watermarks to flag AI-generated content. This helps platforms comply with upcoming transparency rules. 

Overall, Nano Banana 2 shows Google pushing generative AI into products at scale: enterprises and advertisers should expect faster, more flexible image tools from Google services in the coming months.

Source - Google

Apple - MacBook Air with M5 Chip

Apple has announced a new MacBook Air powered by its M5 system-on-chip. While consumer-focused, this hardware update has AI implications: each M5 CPU core now includes a built-in Neural Accelerator, aiming to speed up on-device machine learning. Apple claims the M5 offers up to 4× faster AI task performance than the M4, and 9.5× faster than the original M1. 

For example, the new Air can run local large language models more smoothly and perform intensive tasks (like video analysis or natural language processing) much faster. Apple specifically highlights that the M5 makes the MacBook Air “an AI Mac,” ready for high-bandwidth on-device AI workloads. 

Other upgrades include a 512GB base SSD (double last year’s entry spec) and the latest Wi-Fi 7 networking. In practice, this means businesses and creators who need ultra-portable machines can now work with AI tools (coding assistants, design generators, etc.) on the go with better performance. The rollout of the M5 Air underlines how Apple is gearing its consumer products toward the AI era: enterprises evaluating laptops should note that Apple’s silicon line is increasingly optimized for AI.

Source - Apple

Amazon & OpenAI - Strategic Cloud Partnership

Amazon and OpenAI announced a landmark multi-year partnership to accelerate AI innovation. 

Under the deal, Amazon is investing $50 billion in OpenAI (adding $35 billion to its prior investment) and OpenAI is committing to run on AWS infrastructure. Critically, AWS will build a new “Stateful Runtime Environment” for OpenAI’s frontier models (like GPT-5.x) using AWS Foundry technology. AWS thus becomes OpenAI’s exclusive third-party cloud provider for its most advanced AI workloads. The expanded collaboration includes a massive 8-year contract (totaling roughly $138 billion) and a 1.2 GW data-center lease with SB Energy in Texas.

In short, OpenAI is securing compute on AWS at enormous scale. The move signals that hyperscalers and AI labs are binding themselves together: OpenAI gains long-term access to AWS capacity, while Amazon cements itself as the critical infrastructure backbone for top-tier AI. 

For enterprises, this underscores AWS’s centrality for AI: any company using OpenAI’s models at scale will likely do so on AWS under this new framework. It also highlights that cloud providers will continue to compete fiercely to host the AI workloads of the future.

Source - Amazon

Accenture & Mistral AI - Enterprise AI Alliance

Accenture and French AI firm Mistral AI announced a strategic collaboration to help businesses deploy advanced AI at scale. 

The goal is to combine Accenture’s global consulting and industry expertise with Mistral’s high-performance AI models and software. Specifically, they will co-develop enterprise-grade AI solutions tailored to industry use cases, and train thousands of Accenture professionals on Mistral’s platform. 

This partnership emphasizes “sovereign,” or customizable, AI: the idea that companies can use powerful models while retaining control over data and governance. Accenture’s announcement highlights that clients in Europe and beyond can “rapidly move to secure, large-scale AI deployments aligned with regional requirements.” 

Under the deal, Accenture will become a customer of Mistral AI (using its Studio and models internally) and will embed Mistral’s tools into its offerings. For enterprises, this means new options for deploying cutting-edge AI via Accenture’s services, but with attention to compliance and data control. It also shows the growing trend of consultancies partnering with model vendors to deliver end-to-end AI solutions.


Source - Accenture

European Union - Draft Code of Practice on AI-Generated Content

The European Commission published a second draft of a voluntary Code of Practice for marking and labelling AI-generated content. 

This effort, led by the EU AI Office, aims to help companies comply with the AI Act’s Article 50 transparency rules. The updated draft, prepared by independent experts incorporating stakeholder feedback, streamlines requirements for both AI providers and content deployers.

Key features include a two-layered marking approach (secure metadata plus watermarking) and a proposed common “AI-generated content” icon for web media. The Commission says the new code is more flexible and clear than the first version, reducing burdens while still ensuring AI content is detectable. Importantly, the code is voluntary and focused on feasibility: firms are encouraged to adopt open standards for marking/scrambling and to label synthetic images and deepfakes. 

The Commission will collect feedback on this draft through March and aims to finalize the code by June, ahead of the AI Act’s August 2026 effective date. AI companies and platforms should watch these developments closely, as the final code will set expectations for how to handle AI content transparency in practice.

Source - EU

Perplexity - Personal & Enterprise AI Computers

Perplexity is testing a new approach in which AI works less like a chatbot and more like a computer that carries out tasks on the user’s behalf. In early March, the company introduced Personal Computer, a Mac mini–based system designed to run continuously as a user’s digital assistant. The device connects local files and apps with Perplexity’s cloud services, allowing it to handle tasks and workflows across different tools and devices.

To keep things under control, several safeguards are built in:

  • Users must approve sensitive actions

  • Every session is logged

  • A built-in kill switch lets users stop activity instantly

The feature is currently rolling out through a limited waitlist and is available on Mac first as Perplexity tests the concept.

At the same time, the company expanded its Perplexity Computer platform for enterprise teams. The system can connect directly with business tools such as:

  • Snowflake

  • Salesforce

  • HubSpot

This allows teams to query databases, pull information, and generate reports across different systems. In an internal test of 16,000+ queries, the platform completed the equivalent of 3.25 years of work in four weeks, saving around $1.6 million in labor costs.

The enterprise version also focuses heavily on security, including:

  • SOC 2 Type II compliance

  • SAML single sign-on

  • Audit logs

  • Isolated environments for each query

Overall, the update shows how Perplexity is positioning AI not just as a chatbot, but as a system that can work continuously across personal devices and enterprise software.

Source - Perplexity


Stay informed with our regulatory updates and join us next month for the latest developments in risk management and compliance!

For any feedback or requests for coverage in future issues (e.g., additional countries or topics), please contact us at info@riskinfo.ai. We hope you found this newsletter insightful.


Best regards,

The RiskInfo.ai Team
