AI Pulse – Edition #3
We are back to bring you the latest scoop from the AI world. The updates in brief?
In three words: Two full weeks.
In fifteen words: Two full weeks, culminating in OpenAI’s first DevDay in SF, delivering early Christmas to developers.
In a bit more detail: OpenAI boosts developer efficiency and cost savings with model distillation and prompt caching, while unlocking new possibilities with Advanced Voice Mode in ChatGPT and a real-time API. Google revolutionizes chip design with AlphaChip and rolls out upgrades to its Gemini AI models. Meta’s Llama 3.2 brings vision to edge devices, making technology smarter and more accessible.
Global collaborations and infrastructure investments are heating up: G42 and NVIDIA launch a climate tech lab, and Microsoft makes massive investments to power AI innovation worldwide, even tapping into nuclear energy!
Policy discussions remain tense as California’s Governor vetoes an AI safety bill, while over 100 companies join the EU’s AI Pact ahead of the EU AI Act’s implementation.
On the research front, AI is starting to remember like us, and Google’s helping save whales with bioacoustics. In arts and entertainment, Refik Anadol makes waves with an AI-powered coral marvel at the United Nations, and Lionsgate teams up with Runway to bring AI into filmmaking.
Happy reading! Your AI Pulse Team
@platypus @trenton.dambrowitz @thinktank @dignity_for_all @PaulBellow @vb @jr.2509
P.S.: Get in touch if you’d like to join our growing team.
Table of contents
1. Technology
2. Infrastructure
3. Government & Policy
4. Legal Matters
5. AI Economics
6. Research
7. Entertainment
8. Dev Alerts
1. Technology
Early Christmas for developers
OpenAI kicks off its first DevDay in San Francisco with powerful new releases, including a real-time API, model distillation, prompt caching, vision fine-tuning, and new Playground features
Details
The Realtime API Beta enables developers to create low-latency, multimodal conversational experiences via WebSockets. It supports text and audio inputs and outputs, featuring native speech-to-speech capabilities, steerable voices, and simultaneous multimodal output. Model distillation allows fine-tuning of smaller models using outputs from larger ones. Prompt caching reduces latency and costs by reusing previously processed prompts, offering up to 80% faster response times and 50% cost savings for long prompts without code changes or additional fees. Vision fine-tuning lets developers train models with up to 50,000 images. New Playground features enable quick prototyping by describing a use case, with automatic generation of prompts and schemas for functions and structured outputs. See our Dev Alerts section for more details.
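Conceptually, prompt caching pays off because many API calls share a long, static prefix (system prompt, tool definitions, few-shot examples): the server processes that prefix once and reuses the result on subsequent requests. Here is a minimal local sketch of the idea, purely illustrative; OpenAI applies this automatically server-side with no client code changes, and the class and names below are hypothetical:

```python
import hashlib


class PrefixCache:
    """Toy illustration of prompt-prefix caching: an identical long prefix
    (e.g. a system prompt) is processed once and reused on later calls."""

    def __init__(self):
        self._store = {}

    def _key(self, prefix: str) -> str:
        return hashlib.sha256(prefix.encode()).hexdigest()

    def process(self, prefix: str, suffix: str):
        key = self._key(prefix)
        cache_hit = key in self._store
        if not cache_hit:
            # The expensive step (in a real LLM: computing attention state
            # over the prefix tokens) runs only on a cache miss.
            self._store[key] = f"processed({len(prefix)} chars)"
        return f"{self._store[key]} + {suffix}", cache_hit


cache = PrefixCache()
system_prompt = "You are a helpful assistant. " * 100  # long, static prefix
_, hit1 = cache.process(system_prompt, "Question 1")
_, hit2 = cache.process(system_prompt, "Question 2")
print(hit1, hit2)  # first call misses, second reuses the cached prefix
```

This also shows why the savings apply only to the shared prefix: anything that varies per request (the user's question) is still processed fresh each time.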
Better late than never – Say hello to OpenAI’s Advanced Voice
OpenAI rolls out Advanced Voice for Plus and Team users, enabling more natural, real-time conversations with ChatGPT using GPT-4o’s native audio capabilities.
Details
The Advanced Voice Mode (AVM) is now available on iOS, Android, and macOS platforms, excluding the EU, Switzerland, Iceland, Norway, and Liechtenstein. It captures non-verbal cues like speech speed and responds with emotion for more natural interactions. Users can choose from nine voice options and switch between them during conversations. AVM also supports features like Memory and Custom Instructions. Currently, a daily usage limit applies, after which users revert to Standard Voice mode that transcribes speech to text.
Abu Dhabi’s New AI Climate Lab
AI and cloud computing company G42 and NVIDIA partner to launch the Earth-2 Climate Tech Lab in Abu Dhabi to enhance global weather forecasting with AI-powered, high-resolution simulation technologies
Source: Abu Dhabi Media Office
Details
The lab will utilize NVIDIA’s Earth-2, an open platform that accelerates climate predictions with AI-enhanced, high-resolution simulations, to develop a square-kilometer resolution weather forecasting model. The facility will serve as a research and development hub to create tailored climate solutions leveraging over 100 petabytes of geophysical data.
Improving AI’s Access to Knowledge
Anthropic introduces Contextual Retrieval, enhancing RAG by adding context to information chunks and reducing retrieval failure rates by up to 67%
Details
Contextual Retrieval tackles the issue of lost context in traditional RAG systems when documents are divided into chunks. By adding explanatory context to each chunk before embedding or indexing, it preserves essential details for retrieval. The approach combines Contextual Embeddings, which attach relevant background information to data chunks to aid AI interpretation, and Contextual BM25, which improves lexical matching by ranking text based on term frequency and document length. Individually, these methods reduce retrieval failure rates by 49%, and by 67% when combined with a reranking system that refines chunk selection during queries.
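The core move is simple: prepend a short, document-specific preamble to each chunk before indexing, so that lexical matching can recover references the chunk itself lost. The sketch below illustrates this with a toy BM25 scorer and made-up documents (ACME Corp and the texts are hypothetical); Anthropic's actual pipeline uses an LLM to generate each chunk's context and combines embeddings with BM25 and reranking:

```python
import math
import re
from collections import Counter


def tokenize(text):
    return re.findall(r"[a-z0-9']+", text.lower())


def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with standard BM25."""
    toks = [tokenize(d) for d in docs]
    avgdl = sum(len(t) for t in toks) / len(toks)
    n = len(docs)
    scores = []
    for doc in toks:
        tf = Counter(doc)
        score = 0.0
        for term in tokenize(query):
            df = sum(1 for t in toks if term in t)  # document frequency
            if df == 0:
                continue
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            f = tf[term]
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores


# A chunk that lost its context when the source document was split:
chunk = "The company's revenue grew by 3% over the previous quarter."
# Contextual Retrieval prepends a short explanatory preamble before indexing:
context = "This chunk is from ACME Corp's SEC filing covering Q2 2023."
docs = [
    chunk,                              # bare chunk, context lost
    context + " " + chunk,              # contextualized chunk
    "The weather was mild yesterday.",  # distractor
]
query = "What was ACME Corp's revenue growth in Q2 2023?"
scores = bm25_scores(query, docs)
# The contextualized chunk outranks the bare one for this query.
```

The bare chunk shares only "revenue" with the query, while the contextualized copy also matches "ACME", "Corp's", "Q2", and "2023", which is exactly the failure mode Contextual Retrieval targets.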
Meta’s Llama 3.2 Brings Vision to Edge Devices
Meta releases Llama 3.2, introducing small and medium-sized vision LLMs and lightweight text-only models, enabling edge AI with on-device processing for enhanced privacy and instant responses
Details
The vision models are available in 11B and 90B sizes, supporting image reasoning tasks including document-level understanding, image captioning, and visual grounding and enabling applications such as analyzing sales graphs or navigating maps. The lightweight text-only models, at 1B and 3B sizes, enable multilingual text generation and tool calling on select mobile devices. All models can be downloaded from llama.com and Hugging Face.
Nvidia’s Leaked Blackwell GPUs
Rumored specs of Nvidia’s upcoming RTX 5090 and 5080 cards reveal massive power but potentially disappointing VRAM for the 5080
Details
Leaked specs suggest the RTX 5090 will feature 21,760 CUDA cores, a 512-bit memory bus, 32GB of GDDR7 RAM, and a 600W power draw, yet may retain a two-slot cooler due to redesigned cooling. The RTX 5080 is rumored to have 10,752 CUDA cores, a 256-bit bus, 16GB of GDDR7 VRAM, and a 400W draw. Critics are concerned that the 5080’s 16GB VRAM is insufficient for a high-end card, echoing backlash from the RTX 4080’s 12GB variant.
Google Upgrades Gemini AI Models
Google releases Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002, featuring improved performance in math, long context, and vision tasks
Details
The updated models achieve a 7% increase on the MMLU-Pro benchmark and 20% improvements on the MATH and HiddenMath benchmarks along with enhancements of 2-7% in visual understanding and Python code generation. In response to developer feedback, default output lengths have been reduced by 5-20% for tasks like summarization and question answering to make responses more concise. Google has also updated the experimental Gemini-1.5-Flash-8B model, providing enhanced performance across text and multimodal use cases. The models are available for free through Google AI Studio and the Gemini API, and accessible to larger organizations on Vertex AI.
Google’s AlphaChip Supercharges Chip Design
Google introduces AlphaChip, an AI-driven method that accelerates chip design by significantly reducing layout creation time while improving performance across various industries.
Details
AlphaChip leverages reinforcement learning and graph neural networks to optimize chip floorplanning, reducing design time from months to hours. It has been integrated into Google’s Tensor Processing Units across three generations, including the Trillium series, and adopted by partners like MediaTek for advanced chips like the Dimensity Flagship 5G. The system treats chip floorplanning as a problem-solving task, incrementally improving component placement through rewards based on layout quality.
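The "rewards based on layout quality" idea can be made concrete with a toy reward function: the hypothetical evaluator below scores a placement by negative half-perimeter wirelength (HPWL), a standard proxy for floorplan quality and the kind of quantity a placement agent would learn to maximize. This is only a sketch of the reward concept, not AlphaChip's method, whose actual reward also accounts for congestion and density:

```python
# Toy floorplan: components placed on a grid; nets connect components.
# Reward = negative half-perimeter wirelength (HPWL), so shorter wiring
# earns a higher reward.

def hpwl(placement, nets):
    """Half-perimeter wirelength over all nets.
    placement: {component: (x, y)}; nets: iterable of component groups."""
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total


def reward(placement, nets):
    return -hpwl(placement, nets)


nets = [("cpu", "cache"), ("cpu", "io"), ("cache", "io")]

spread_out = {"cpu": (0, 0), "cache": (9, 0), "io": (0, 9)}
compact = {"cpu": (0, 0), "cache": (1, 0), "io": (0, 1)}

print(reward(compact, nets), reward(spread_out, nets))
# The compact placement earns the higher reward (shorter wires).
```

An RL agent places one component at a time and is scored on the finished layout, so it gradually learns placements that keep heavily connected components close together.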
NEC’s AI Fact-Checking Push
NEC Corporation develops AI to analyze the trustworthiness of online information, supporting fact-checking efforts in Japan to counter false and misleading content
Details
Carried out as part of a project under Japan’s Ministry of Internal Affairs and Communications, the technology seeks to leverage LLMs to assess the authenticity of online content across text, images, video, and audio. The AI detects inconsistencies and verifies sources, producing reports similar to those by fact-checking experts. This tool aims to assist organizations like the Japan Fact-Check Center and media outlets in combating misinformation.
2. Infrastructure
Empowering AI Development in Low- and Middle-Income Countries
OpenAI launches the OpenAI Academy, providing training, API credits, and community support to developers and organizations harnessing AI to solve local challenges and drive economic growth in low- and middle-income countries
Details
The Academy will grant $1 million in API credits, provide training and guidance from OpenAI experts, and build a global network for collaboration. It plans to partner with philanthropists to invest in community-focused organizations. The program builds on OpenAI’s support for developers addressing global challenges, aiming to make AI accessible and beneficial to diverse communities by empowering those familiar with local cultures and social dynamics.
Microsoft Invests R$14.7 Billion in Brazil’s AI Boom
Microsoft is set to invest R$14.7 billion in Brazil, enhancing AI infrastructure and launching ConectAI to train 5 million people, accelerating the country’s AI innovation and workforce development
Details
Over the next three years, Microsoft will expand its cloud and AI infrastructure in Brazil, focusing on its São Paulo datacenters. Through partnerships with organizations like SENAI and UNICEF, the ConectAI initiative will offer certifications and educational programs to train 5 million people in AI skills. The investment also supports public sector efficiency, workforce development, and sustainability efforts through renewable energy projects, aligning with Brazil’s push for AI-driven economic growth and fostering an inclusive AI ecosystem.
Global Infrastructure Investments
Microsoft, BlackRock, Global Infrastructure Partners (GIP), and MGX launch a $100 billion partnership with NVIDIA support to invest in AI data centers and power infrastructure in the US and partner countries
Details
The Global AI Infrastructure Investment Partnership (GAIIP) aims to mobilize up to $100 billion, including $30 billion in private equity and debt financing, to meet the growing demand for AI capabilities. NVIDIA will provide expertise in AI data centers and factories to support the initiative. The partnership will operate on an open architecture, offering access to a diverse range of partners and companies.
Microsoft Powers AI with Nuclear Energy
Microsoft partners with Constellation Energy to resurrect Pennsylvania’s Three Mile Island nuclear plant, aiming to power AI data centers with carbon-free electricity, pending regulatory approvals
Details
The plant, retired in 2019 due to economic factors, will undergo a $1.6 billion revival to produce 835 megawatts—enough to power about 700,000 homes. Microsoft’s 20-year power purchase agreement aims to offset data center electricity use and further its commitment to grid decarbonization. The project is planned to be completed by 2028 but awaits key regulatory approvals, with Constellation yet to file for restart permission from the Nuclear Regulatory Commission.
Zenlayer Strengthens AI Infrastructure in Asia
Zenlayer expands its network infrastructure across Asia, delivering ultra-low latency and high-bandwidth connections to support AI development, particularly in Southeast Asia, with a capacity of 100 Tbps and integrated AI support services
Details
The upgrade, centered around Singapore, links 60 data centers across countries including Singapore, Hong Kong, Indonesia, Japan, Thailand, Vietnam, Malaysia, and the Philippines. The network, which features a total capacity of approximately 100 Tbps, uses 800-gigabit-per-second single-fiber bandwidth technology to provide faster and more reliable data transmission. In addition, Zenlayer is offering a range of Nvidia GPU resources and managed AI data center services, providing end-to-end support for organizations to scale AI operations throughout Asia.
3. Government & Policy
California Governor Vetoes AI Safety Bill, Citing Overreach
Governor Gavin Newsom vetoed the contentious AI safety bill SB-1047, arguing it was overly broad and could hamper innovation, while ordering state agencies to assess AI risks
Details
Governor Newsom criticized the bill for not distinguishing between high-risk AI applications and basic functions, saying it could impose stringent standards universally. Bill author Senator Scott Wiener expressed disappointment, arguing the veto leaves California less safe and that voluntary industry commitments are insufficient. Newsom indicated willingness to collaborate on future AI legislation and suggested California might need to act independently if federal action stalls.
Over 100 Companies Commit to EU AI Pact for Ethical AI
Companies across industries voluntarily adopt principles of the upcoming EU AI Act to promote safe and responsible AI innovation ahead of its enforcement.
Source 1: European Commission, Source 2: Reuters
Details
Over 100 companies, including multinationals and SMEs from sectors like IT, healthcare, banking, automotive, and aerospace, have signed the EU AI Pact and its voluntary pledges. The commitments include fostering AI adoption, identifying potential high-risk AI systems, and promoting awareness of ethical AI development among staff. OpenAI reiterated its commitment, while Meta has chosen not to immediately join, focusing instead on direct compliance with the AI Act.
US to Host Global AI Safety Summit
The US will host a global AI safety summit on November 20-21 in San Francisco, gathering international leaders to promote cooperation on AI safety, security, and trust
Details
The summit will be the first meeting of the International Network of AI Safety Institutes. Members include Australia, Canada, the EU, France, Japan, Kenya, South Korea, Singapore, Britain, and the U.S. The event aims to foster technical collaboration among experts to advance global knowledge sharing on AI safety. It also serves as a precursor to the AI Action Summit in Paris scheduled for February.
OECD and UN Collaborate on AI Governance
OECD and UN join forces to enhance global AI governance with regular science-based AI risk and opportunity assessments
Details
The partnership leverages the OECD’s AI initiatives, such as the AI Policy Observatory and the Global Partnership on AI, alongside the UN’s global reach to support member states and stakeholders in advancing inclusive and trustworthy AI governance. It will convene leading scientists and academic centers to strengthen policy responses to AI’s opportunities and risks.
US-UAE AI Partnership
The United States and the United Arab Emirates agree to enter into a memorandum of understanding to deepen their cooperation on AI
Details
The collaboration aims to promote responsible AI by fostering international frameworks and standards that ensure ethical development and protect human rights. Key focus areas include aligning regulatory frameworks to encourage innovation while safeguarding national security, conducting ethical AI research to address bias and discrimination, enhancing cybersecurity to manage emerging technological risks, and facilitating bilateral trade and investment in AI technologies. Both countries will also work on talent development through joint training programs for AI professionals and promote the use of clean energy to meet the energy demands of AI systems.
Understanding the Systemic Risk and Macro-Impact of AI
U.S. SEC Chair Gary Gensler, Bank of Canada Governor Tiff Macklem, and Dutch central banker Steven Maijoor caution about the potential systemic risk that the reliance on dominant AI platforms in finance could introduce and the need for a deeper understanding of AI’s effects on labor markets, inflation, and price-setting
Source 1: U.S. Securities & Exchange Commission, Source 2: Bank of Canada, Source 3: Bank for International Settlements
Details
Gensler used the film “Her” to illustrate concerns about a few dominant AI platforms in finance, warning that reliance on a small number could lead to systemic risks if these models fail or prompt simultaneous actions. Macklem, speaking at an AI economics conference, noted that while AI might boost long-term productivity, it could also cause short-term inflation due to increased demand and investment. He stressed that understanding AI’s impact on labor markets and price-setting is critical for effective monetary policy. Maijoor emphasized that although some risks such as concentration risk and third-party dependence are not new, AI introduces new interconnectedness and potential for creative fraud, necessitating a fresh approach to risk management.
Japan Eyes New AI Regulations
Japan is considering its first AI regulations, shifting from voluntary guidelines toward measures addressing generative AI risks and potential copyright infringement
Details
Prime Minister Fumio Kishida has initiated discussions on new AI regulations, moving beyond sector-specific rules and voluntary guidelines. Historically, Japan avoided broad AI restrictions to prevent stifling innovation, but global measures including the EU AI Act and the US AI executive order are prompting a reassessment. Kishida emphasizes that upcoming regulations will be innovation-friendly, aiming to make Japan the most AI-friendly country while addressing risks swiftly.
4. Legal Matters
Generative AI in Courtrooms
A recent federal court ruling in Tremblay v. OpenAI suggests that AI-generated prompts and outputs, even from pre-suit investigations, may be discoverable in litigation, raising issues around privilege and evidence preservation
Details
In Tremblay v. OpenAI, authors accused OpenAI of copyright infringement for using their works to train ChatGPT and generating outputs similar to their writings. OpenAI sought discovery of all related ChatGPT prompts and outputs, arguing that privilege was waived when the authors referenced them in their complaint. While a magistrate judge agreed, the district judge limited the waiver to only the prompts and outputs cited in the complaint. This ruling indicates that AI-generated materials may be subject to discovery, impacting how legal practitioners handle AI-related evidence in litigation.
California’s AI Deepfake Crackdown Faces Legal Hurdle
Legal challenges emerge against California’s new laws targeting AI-generated election deepfakes, sparking a debate over free speech and the balance between curbing misinformation and protecting constitutional rights
Details
Governor Gavin Newsom signed three laws to prevent AI-generated false images and videos in political ads close to Election Day. Two of them, including one allowing individuals to sue over election deepfakes, are being challenged in court as unconstitutional restrictions on free speech. An individual who created parody videos of Vice President Kamala Harris has filed a lawsuit, claiming the laws censor free speech and permit legal action over disliked content. The laws also require online platforms to remove deceptive material and mandate disclosure of AI use in altered media.
5. AI Economics
OpenAI Restructures to Attract Investors as Apple Exits Funding Talks
OpenAI plans to become a for-profit benefit corporation, aiming to draw more investors by altering its governance structure, while Apple withdraws from a $6.5B funding round
Details
The restructuring involves removing caps on investor returns to attract more investment, with the non-profit retaining a minority stake. CEO Sam Altman will receive equity for the first time. Meanwhile, Apple is said to have withdrawn from the ongoing $6.5 billion funding round valuing OpenAI at over $100 billion. Microsoft and Nvidia remain in talks. These changes coincide with recent leadership departures, including that of Chief Technology Officer Mira Murati, and raise questions about OpenAI’s mission amid the governance changes.
AI Adoption Soars Among Small Businesses
98% of small businesses are now using AI-powered tools, with 40% leveraging generative AI to boost productivity and innovation
Details
A survey by the U.S. Chamber of Commerce and Teneo indicates that generative AI adoption among small businesses has nearly doubled since last year. Business owners note that while AI enhances productivity and reduces personnel costs, human oversight remains essential to refine AI-generated outputs. The survey also reveals that 91% believe AI will aid their growth, and 77% intend to adopt emerging technologies including the metaverse.
Generative AI Adoption Among Leading Eurozone Companies
ECB survey reveals 75% of major non-financial firms in the eurozone are using generative AI, primarily to enhance information access and content creation
Details
Conducted between May and June 2024, the survey highlights that most firms adopted generative AI recently, with significant uptake during 2023. Despite widespread adoption, only about 10% of employees use it regularly. Beyond enhancing information access and content creation, companies are applying generative AI to software development and customer engagement. Approximately half view reducing headcount as an important reason for adoption.
6. Research
AI’s Schrödinger’s Memory
New research suggests large language models (LLMs) exhibit a form of memory similar to humans’, capable of recalling learned content when prompted, akin to a concept described as Schrödinger’s memory
Details
The researchers from Hong Kong Polytechnic University trained LLMs to memorize thousands of Chinese poems, achieving nearly 100% recall—surpassing average human capabilities. They describe LLMs’ memory as “Schrödinger’s memory,” where information becomes observable only when queried. The study suggests that LLMs dynamically generate outputs based on inputs, drawing parallels between machine and human cognition.
Google’s New Whale Bioacoustics AI Model
Google Research releases a new AI model capable of identifying vocalizations for eight distinct whale species, including the mysterious Biotwang of Bryde’s whales, to enhance marine mammal conservation efforts
Details
Developed with the U.S. National Oceanic and Atmospheric Administration (NOAA), the model processed over 200,000 hours of underwater recordings to classify sounds from humpback, killer, blue, fin, minke, Bryde’s, North Atlantic right, and North Pacific right whales. It recognizes multiple call types for two species and converts audio data into spectrogram images to handle the varying frequencies of whale vocalizations. Available on Kaggle Models, the tool aims to help researchers track whale movements and analyze large passive acoustic monitoring datasets more efficiently.
Advancing Global AI Research & Development
U.S. Secretary of State Antony Blinken announces the Global AI Research Agenda and AI in Global Development Playbook, fulfilling mandates from President Biden’s Executive Order to guide future AI research and its use in advancing UN Sustainable Development Goals.
Source: U.S. Department of State
Details
The Global AI Research Agenda (GAIRA) highlights critical international research opportunities and aims to foster a holistic approach to AI development and use, ensuring benefits are shared globally, human rights are protected, and inequalities are reduced. It considers both technical advances and societal impacts, underscoring AI’s potential to advance the UN Sustainable Development Goals (SDGs) and calling for research on AI’s impact on the global labor market. Likewise, the Playbook serves as a roadmap to leverage safe, secure, and trustworthy AI for sustainable development, distilled from consultations with government officials, NGOs, tech firms, and individuals worldwide.
AI Scams Pose Elevated Risk to Investors
A new study by Canada’s Ontario Securities Commission reveals that investors are more susceptible to AI-enhanced scams in retail investing but finds techniques such as scam exposure and web plugins can help mitigate the risk
Source: Ontario Securities Commission
Details
The report highlights how AI is being used to rapidly spread common investment frauds and to create deepfake and voice-cloning schemes that amplify them. An online simulation showed participants invested 22% more in AI-enhanced scams than conventional ones. Effective countermeasures include the “inoculation” technique—prior exposure to scams—which reduced fraudulent investments by 10%, and a web browser plug-in that flags potential scams, which decreased investments by 31%.