AI Pulse – Edition #5
No rest for the wicked.
We’re back with another packed edition of AI Pulse, and if there’s one thing we’ve learned, it’s that the AI revolution shows no signs of slowing down!
OpenAI is stirring the pot with the launch of its new gpt-4o-audio-preview model, the roll-out of a ChatGPT Windows desktop app, and the expansion of Advanced Voice capabilities across Europe. A wave of new models from Mistral, NVIDIA, Stability AI, and Meta prioritizes efficiency and customization, while Anthropic’s new Computer Use capability unlocks new automation use cases.
On the energy front, the focus on nuclear power continues to rise, with plans for Microsoft’s revival of the Three Mile Island plant taking further shape, Amazon funding small modular reactors, and Google striking new nuclear deals.
Legal boundaries are also shifting, with new copyright restrictions and U.S. action on AI-generated exploitation imagery. In research, OpenAI’s efficiency-focused models and BitNet’s sustainable approach showcase a push to cut the environmental cost of AI.
Creativity meets controversy as Adobe invests $100M in AI literacy while prominent creators push back against unlicensed AI training on their work.
And that’s just scratching the surface. Ready to dive in?
Your AI Pulse Team: @jr.2509 @PaulBellow @vb @platypus @dignity_for_all @trenton.dambrowitz
Table of contents
1. Technology Updates
2. Infrastructure
3. Government & Policy
4. Legal Matters
5. AI Economics
6. Research
7. Entertainment
8. Dev Alerts
9. Community Spotlight
1. Technology Updates
A month into Q4, the surge in AI-related technology advancements continues apace. OpenAI has debuted a Windows desktop app for ChatGPT and expanded access to Advanced Voice across Europe. A host of new models from Mistral, NVIDIA, Stability AI, Meta, and Anthropic are pushing boundaries in efficiency and real-world applicability, tailoring AI to diverse needs from edge devices to advanced coding tasks. Anthropic’s latest feature enables AI to interact with computers in a human-like manner, opening new horizons for automation. Hugging Face simplifies AI deployment with zero-configuration microservices, while NVIDIA and Meta unveil architectural advancements to boost performance. Meanwhile, Perplexity AI and Apple’s new iPad mini integrate cutting-edge AI features to enrich user engagement.
OpenAI Launches ChatGPT Windows Desktop App and Introduces Chat Search
OpenAI has launched an early version of a ChatGPT desktop app for Windows, enhancing interaction with files and photos. It is initially available exclusively to Plus, Team, Enterprise, and Edu users, with a full release expected later this year. Concurrently, OpenAI is rolling out the ability to search through chat history on ChatGPT web and has expanded access to Advanced Voice to all Plus users in the EU, Switzerland, Iceland, Norway, and Liechtenstein.
New Models Prioritize Efficiency, Customization, and Real-World Applicability
The model landscape continues to expand with a series of new releases. Mistral introduced “les Ministraux” models—Ministral 3B and 8B—optimized for on-device and edge use with up to 128k context length, excelling in local tasks like translation, offline smart assistants, and robotics. NVIDIA’s Llama-3.1-Nemotron-70B-Instruct model, fine-tuned using Reinforcement Learning from Human Feedback (RLHF) to improve response alignment and helpfulness, is now accessible via the Hugging Face Transformers library. Stability AI launched Stable Diffusion 3.5 in three variants: the 8-billion-parameter Large model, offering high-quality images with top-tier prompt adherence; the Large Turbo, a distilled version for faster inference with similar output quality; and the Medium model, a 2.5-billion-parameter model featuring MMDiT-X architecture, optimized to run smoothly on consumer hardware. Additionally, Meta’s FAIR team released new models and tools designed to advance machine intelligence and open science. These include SAM 2.1, an enhanced Segment Anything Model with improved occlusion handling, and Meta Spirit LM, its first open-source multimodal model blending text and speech, enabling cross-modal tasks such as ASR (speech recognition) and TTS (speech synthesis). Finally, Anthropic upgraded its Claude lineup, releasing an improved Claude 3.5 Sonnet and the new Claude 3.5 Haiku, both optimized for coding and software engineering tasks with stronger industry benchmark performance while maintaining speed and cost efficiency.
Source 1: Mistral, Source 2: NVIDIA, Source 3: Anthropic, Source 4: Meta, Source 5: Stability AI
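As an illustration of the Transformers access route, here is a minimal, hedged sketch of loading NVIDIA’s model with the Hugging Face Transformers library. The checkpoint id matches the model’s public Hugging Face listing; the dtype and device settings are assumptions, and the 70B checkpoint requires substantial GPU memory:

```python
# Minimal sketch: querying NVIDIA's Llama-3.1-Nemotron-70B-Instruct via
# Hugging Face Transformers. The 70B checkpoint needs substantial GPU memory;
# device_map="auto" shards it across available devices (requires accelerate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "Explain RLHF in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```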
Anthropic Launches New ‘Computer Use’ Capability
Anthropic has expanded its developer toolkit with a new computer use capability, allowing Claude 3.5 Sonnet to interact with computers in a human-like manner. Now available in public beta through the Anthropic API, the feature enables Claude to visually interpret screens, move cursors, click buttons, and type, transforming how developers can automate tasks. While still experimental, computer use is already being piloted by early adopters such as Replit, which uses it to automate multi-step UI-based workflows. By converting developer instructions into direct computer commands, Claude can perform tasks such as navigating web pages, filling out forms, and processing data from files.
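For developers curious what this looks like in practice, here is a minimal sketch using the Anthropic Python SDK, based on the public beta. The display dimensions and prompt are illustrative, and a real agent loop would have to execute the tool actions Claude requests (screenshots, mouse moves, keystrokes) and return the results in follow-up messages:

```python
# Minimal sketch of Anthropic's computer use beta (anthropic Python SDK).
# Display size and prompt are illustrative; a real agent must execute the
# tool_use actions Claude returns (screenshot, click, type) and feed the
# results back in follow-up messages.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],  # opt into the computer use beta
    tools=[{
        "type": "computer_20241022",
        "name": "computer",
        "display_width_px": 1280,
        "display_height_px": 800,
    }],
    messages=[{"role": "user", "content": "Open the browser and search for AI news."}],
)

# Claude responds with tool_use blocks describing the actions it wants taken.
for block in response.content:
    print(block.type, getattr(block, "input", None))
```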
Google Introduces SynthID for Watermarking AI-Generated Content
Google has released SynthID, a new suite of tools that embeds imperceptible digital watermarks directly into AI-generated images, audio, text, or video. SynthID helps identify AI-generated content, promoting trust and transparency. The watermarking technique is imperceptible to humans but detectable for identification, even after modifications such as cropping, adding filters, or compression. SynthID is integrated into various Google platforms, including Vertex AI’s text-to-image models (Imagen 3 and Imagen 2), the ImageFX tool, and the Veo video generation model. Google also open-sourced the text watermarking technology through the Google Responsible Generative AI Toolkit.
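The open-sourced text watermarking is exposed through Hugging Face Transformers (v4.46 and later). Below is a minimal sketch; the stand-in model and the watermark keys are illustrative only, since real deployments keep their keys secret:

```python
# Minimal sketch of SynthID text watermarking via Hugging Face Transformers
# (requires transformers >= 4.46). Model and keys are illustrative; real
# deployments keep the watermarking keys private.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

model_id = "gpt2"  # small stand-in model for demonstration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

watermark_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],  # example keys only
    ngram_len=5,
)

inputs = tokenizer("The future of AI transparency is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermark_config,  # embeds the statistical watermark
    do_sample=True,                        # watermarking requires sampling
    max_new_tokens=50,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```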
Hugging Face Releases Zero-Configuration AI Microservices
Hugging Face has launched Generative AI Services (HUGS), which are optimized, zero-configuration inference microservices designed to streamline and expedite the development of AI applications using open models. HUGS leverages open-source technologies like Text Generation Inference and Transformers, and supports a variety of hardware accelerators (including NVIDIA and AMD GPUs). These microservices are intended to ease the transition from closed-source to self-hosted open models by providing endpoints compatible with the OpenAI API, thereby maintaining hardware efficiency and facilitating easy updates as new open models become available.
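Because the endpoints are OpenAI-compatible, switching existing client code to a self-hosted HUGS deployment should largely come down to changing the base URL. A minimal sketch, assuming a container is already running locally (the host, port, and model name below are placeholders):

```python
# Minimal sketch: pointing the standard OpenAI Python client at a self-hosted
# HUGS deployment. The URL and model name are placeholders for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # hypothetical local HUGS endpoint
    api_key="not-needed-locally",         # local deployments may ignore this
)

response = client.chat.completions.create(
    model="tgi",  # identifier depends on the deployed container
    messages=[{"role": "user", "content": "Summarize HUGS in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```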
NVIDIA and Meta Introduce Advancements in AI Architectures
NVIDIA and Meta have revealed several new advancements in AI architectures to boost model performance and efficiency. NVIDIA’s Normalized Transformer (nGPT) integrates normalization into the model structure, mapping all vectors onto a hypersphere, which improves stability and speeds up training by 4 to 20 times without sacrificing performance. This approach aims to enhance Transformer training efficiency and model generalization. Meta released Layer Skip, which accelerates large language model (LLM) generation by up to 1.7x through selective layer execution; SALSA, for assessing post-quantum cryptography standards; the Meta Open Materials 2024 dataset for AI-driven materials discovery; and MEXMA, a cross-lingual sentence encoder for multilingual applications.
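To make the nGPT idea concrete, here is an illustrative sketch (not NVIDIA’s code) of its core operation: keeping representations on the unit hypersphere via L2 normalization, so dot products between token vectors become cosine similarities:

```python
# Illustrative sketch of the hypersphere constraint behind nGPT (not NVIDIA's
# code): L2-normalize activations so every vector lies on the unit sphere.
import torch
import torch.nn.functional as F

def to_hypersphere(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Project vectors onto the unit hypersphere via L2 normalization."""
    return F.normalize(x, p=2, dim=dim)

hidden = torch.randn(4, 16, 512)   # (batch, seq_len, d_model) activations
hidden = to_hypersphere(hidden)    # every token vector now has unit norm
print(hidden.norm(dim=-1))         # ~1.0 everywhere, up to float error
```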
Perplexity AI Announces New Features to Enhance Research Capabilities
Perplexity AI has launched Internal Knowledge Search and Perplexity Spaces to enhance how users access and utilize information with AI-powered tools. Internal Knowledge Search lets users search across both public web content and internal knowledge bases, including uploaded files, to access and synthesize information faster and more efficiently. Spaces provides AI-powered collaboration hubs where teams can invite collaborators, connect internal files, and customize the AI assistant by choosing preferred models and setting specific instructions.
Apple Powers Up New iPad mini with Advanced AI Capabilities
Apple has unveiled the latest iPad mini, featuring the potent A17 Pro chip and cutting-edge Apple Intelligence, tailored for enhanced AI-driven personal experiences while maintaining user privacy. This device brings functional improvements, such as a 2x faster Neural Engine, supporting a range of AI features including the new Apple Intelligence Writing Tools, which refine user writing across various apps. Moreover, AI facilitates a superior camera experience with machine learning that detects and scans documents, and integrated ChatGPT capabilities that offer advanced text and image understanding through Siri.
Worldcoin Streamlines Iris-Scanning with AI-Powered Orbs
Worldcoin, co-founded by OpenAI CEO Sam Altman, has rebranded as World and introduced a simplified version of its eyeball-scanning Orb device, part of its effort to authenticate human identities amid AI advancements. The updated Orb, built on NVIDIA’s Jetson AI and robotics platform, is designed to be more economical and accessible, streamlining the process by which users receive a World ID to authenticate their online identity and secure WLD cryptocurrency tokens. Rich Heley, Chief Device Officer at Tools for Humanity, emphasized the need for significantly more Orbs to enhance global accessibility. Despite privacy concerns and resistance in countries like Hong Kong and Portugal, World reports having verified almost 7 million distinct individuals globally through its system.
Morgan Stanley and Swift Expand AI Use for Research and Payments
Morgan Stanley and Swift are expanding their AI capabilities to enhance efficiency and security within the financial sector. Morgan Stanley has launched AskResearchGPT, a generative AI tool powered by OpenAI, to streamline access to its research reports, tripling query volumes compared to previous AI tools and reducing reliance on traditional communication methods. Meanwhile, Swift is set to introduce an AI-powered anomaly detection service in January 2025 to bolster fraud prevention in international payments. This tool, built on Swift’s Payment Controls Service, will leverage pseudonymised transaction data across its vast network of over 11,500 financial institutions, enabling real-time identification of suspicious activity. Both initiatives reflect a broader industry trend toward adopting AI solutions for enhanced productivity and security.
2. Infrastructure
With AI driving energy demands higher, tech giants are pursuing different nuclear technologies. Amazon and Google are deploying small modular reactors (SMRs), while Microsoft is undertaking extensive upgrades to revive the dormant Three Mile Island plant. These efforts highlight Big Tech’s strategy to secure large-scale, reliable energy despite regulatory hurdles and activist challenges. Conversely, Wolfspeed’s decision to halt its semiconductor factory plans in Germany underscores ongoing challenges in Europe’s chip manufacturing ambitions.
Wolfspeed Halts Chip Factory Plan Impacting AI Tech Adoption
Wolfspeed has put its plans to establish a semiconductor factory in Germany on hold, a move influenced by the slower-than-expected adoption of electric vehicles, a key driver of demand for silicon carbide chips. These chips are used not only in electric vehicles but also in the industrial and energy sectors, including emerging AI technologies that rely on advanced semiconductor components. The suspension reflects broader challenges in the European Union’s effort to expand its semiconductor manufacturing capability, an essential factor for advancing AI development and reducing dependency on Asian chipmakers. It also prompts a re-evaluation of Germany’s attractiveness as a hub for high-tech investments, marking a potential setback for AI technology growth in the region.
Microsoft Powers AI Expansion with Nuclear Energy Revival at Three Mile Island
Microsoft has signed a 20-year power agreement with Constellation Energy to revive the dormant Three Mile Island nuclear plant in Pennsylvania, aiming to power its data centers with nearly 835 megawatts of electricity. This move is part of Microsoft’s strategy to secure large amounts of carbon-free electricity to support AI technologies as the company undergoes significant expansion. Constellation’s ambitious plan will involve restoring the cooling towers and reactors at an estimated cost of $1.6 billion, including installation of a new transformer and refurbishment using modern materials. The renewed interest in nuclear energy is driven by AI’s growing power needs and Microsoft’s environmental commitments, despite regulatory, safety, and environmental challenges anticipated from local activists.
Amazon Embraces Nuclear Energy to Power AI Initiatives
Amazon has signed agreements to develop small modular reactors (SMRs) to meet the rising electricity demand from data centers driven by AI advancements. Partnering with X-Energy, Amazon plans to fund a feasibility study for an SMR project in Washington state, with the potential to purchase electricity from four modules. Amazon aims to bring over 5 gigawatts of SMR capacity online in the U.S. by 2039, marking a significant step in commercial SMR deployment. Despite the promise of greenhouse gas-free energy, challenges remain, such as high costs and the management of nuclear waste.
Google's Nuclear Agreement to Power AI Innovations
Google has entered into an agreement with Kairos Power to purchase nuclear energy from small modular reactors (SMRs), aiming to bring the first reactor online by 2030. This initiative seeks to meet the growing energy demands of AI technologies with carbon-free power, potentially adding up to 500 MW to U.S. electricity grids. Google frames this partnership as a step toward environmentally sustainable AI development, though the success of SMRs remains uncertain, and some experts are concerned about Big Tech’s growing influence over energy resources.
3. Government & Policy
AI remains a central theme for governments and the public sector. The G7’s introduction of a comprehensive toolkit designed to translate ethical AI principles into actionable public policies and the UAE’s adoption of an AI-powered digital system that streamlines legal procedures highlight the shift toward responsibly integrating AI within the public sector. Concurrently, financial regulators continue the balancing act of promoting AI adoption while implementing risk safeguards. The UK Financial Conduct Authority has launched a specialized AI Lab, following a similar initiative by the Hong Kong Monetary Authority earlier this year. Meanwhile, Hong Kong’s government has introduced a fresh AI policy statement underscoring its commitment to responsible AI innovation. In the United States, the New York Department of Financial Services has issued new AI-focused cybersecurity guidance, and in Australia the securities regulator has voiced concerns over a “governance gap” in AI adoption among financial licensees. Lastly, a new study by the European Audiovisual Observatory raises concerns about AI’s impact on creativity, employment, and intellectual property within the audiovisual sector.
G7 Releases New Toolkit to Navigate AI in the Public Sector
The G7 has released a new Toolkit for the application of AI in the public sector. The Toolkit is aimed at aiding policymakers and leaders in transforming AI principles into practical policies. It provides ethical guidelines, shares exemplary AI practices in the public sector, and outlines key challenges and policy strategies to optimize AI deployment and coordination across G7 member nations. Moreover, it includes case studies illustrating the benefits and hurdles of public sector AI applications, offering insights into the developmental journey of AI solutions within governments.
ASIC Warns of Governance Gap in AI Adoption by Financial Licensees
The Australian Securities and Investments Commission (ASIC) has cautioned financial services and credit licensees to ensure their governance practices keep pace with the accelerating adoption of artificial intelligence (AI). In its first market review examining AI use among 23 licensees, ASIC found potential for governance to lag behind AI implementation, even though current AI usage remains relatively cautious and focuses on supporting human decisions and improving efficiencies. With around 60% of licensees planning to ramp up AI usage, ASIC Chair Joe Longo emphasized the necessity of updating governance frameworks to address future challenges posed by the technology. The review revealed that nearly half of the licensees lacked policies considering consumer fairness or bias, and even fewer had guidelines on disclosing AI use to consumers.
HK Government Issues Policy Statement on Responsible AI in Financial Market
The Hong Kong Government has released a policy statement detailing its approach to the responsible application of artificial intelligence (AI) in the financial market. The policy outlines a dual-track approach to promote AI adoption in the financial sector while addressing potential challenges such as cybersecurity, data privacy, and intellectual property rights. Key initiatives include encouraging financial institutions to develop AI governance strategies with human oversight, providing access to AI resources through the Hong Kong University of Science and Technology, and continuous updates of regulations by financial regulators to keep pace with AI developments.
Source: Government of the Hong Kong Special Administrative Region
UAE Public Prosecution Introduces AI-Powered Digital System
The Federal Public Prosecution in the UAE has announced the development of a new AI-based digital system to streamline and enhance legal procedures. Created with AI71, a company specializing in AI solutions for the governmental and private sectors, the system offers precise legal research, comprehensive fact analysis, and quick access to legal precedents. It is expected to accelerate the handling of criminal cases, increase transparency, and support more efficient judicial decision-making.
UK Financial Conduct Authority to Foster Responsible AI Adoption
The UK Financial Conduct Authority (FCA) has introduced an AI Lab as part of its ongoing mission to foster innovation in financial services. This initiative aims to assist firms in overcoming the challenges of developing and implementing AI solutions, while also supporting the government’s agenda on safe and responsible AI advancement. The AI Lab will consist of several components including AI Spotlight for showcasing AI applications, an AI Sprint for collaborative policy development, an AI Input Zone for stakeholder feedback, and an enhanced Supercharged Sandbox for AI testing.
NY DFS Releases New AI Cybersecurity Guidance
The New York State Department of Financial Services (DFS) has issued new guidance to help regulated entities address cybersecurity risks associated with AI. The guidance reflects concerns that AI could introduce new vulnerabilities, including AI-enabled social engineering, where deepfakes in audio, video, and text manipulate employees into unauthorized actions like fraudulent transfers, and AI-enhanced cyberattacks that swiftly identify system weaknesses and create new malware. The guidance is designed to help financial institutions identify and mitigate such AI-specific risks, and it encourages entities to proactively evolve their cybersecurity programs, for example through regular risk assessments that account for AI-related threats and strong third-party vendor management with appropriate contractual protections and access controls.
AI Impact Analysis on Europe's Audiovisual Industry Reveals Concerns
The European Audiovisual Observatory, a Council of Europe entity, recently released a comprehensive report analyzing AI’s impact on Europe’s cinema, TV, and streaming sectors. The report highlights significant risks, including job displacement, diminished human creativity, threats to copyright and personality rights, and increased potential for misinformation and disinformation. It examines the readiness of current EU legal frameworks across the 27 member states to address these challenges, assessing whether existing regulations are robust enough to manage AI’s rapid growth within the audiovisual industry, and reviews the technology’s broader ethical implications for the sector.
4. Legal Matters
As AI accelerates its reach into various facets of society, recent developments illustrate the intricate dance between AI innovation and the legal system’s efforts to keep pace, as well as the profound reconsiderations the technology is prompting. The U.S. Justice Department is opening a new chapter in law enforcement by targeting AI-generated child exploitation imagery, highlighting the technology’s dark potential. At the same time, Penguin Random House is asserting its rights by restricting AI from training on its books, defending intellectual property in the digital age. OpenAI is bolstering its navigation of regulatory complexities with the appointment of a new Chief Compliance Officer. Meanwhile, Character.AI faces legal challenges following a tragic incident involving a teenager’s interaction with a chatbot, bringing to light the personal risks associated with AI companionship.
US Justice Department Tackles AI-Generated Child Exploitation
The U.S. Justice Department is increasing its focus on cases involving AI-generated child sexual abuse imagery, which law enforcement views as a growing threat. Prosecutors and child protection advocates worry that AI technology, which can easily create or modify images of children, risks normalizing abusive material and could impede efforts to identify and protect real victims. The National Center for Missing and Exploited Children reports a steady increase in AI-related child exploitation tips, though AI cases remain a fraction of all reports. In response, advocacy groups have secured commitments from major AI firms to monitor their platforms for misuse and prevent their models from generating exploitative content. As these prosecutions advance, they may set important legal precedents in defining AI’s role in child exploitation cases.
Penguin Random House Sets Copyright Restrictions Against AI Training
According to a report by The Bookseller, Penguin Random House has introduced a new clause in the copyright page of its books, both new and reprinted, explicitly prohibiting their use for AI training purposes. This move marks the publisher as one of the first major publishing houses to consider AI implications directly in their copyright terms, comparable to a website’s robots.txt file in its function as a disclaimer. While this clause aims to prevent AI technologies from mining their content under the EU’s text and data mining exception, it acts more as a procedural safeguard than an enforceable legal measure. The publisher emphasizes its dedication to protecting the intellectual property of its authors, even as other publishers like Wiley and Oxford University Press pursue AI training partnerships.
OpenAI Appoints New Compliance Officer to Navigate AI Regulations
OpenAI has appointed Scott Schools as its new Chief Compliance Officer to strengthen its efforts in responsibly advancing AI. With significant experience in the legal and compliance fields, including roles at the U.S. Department of Justice and Uber Technologies, Schools is expected to work closely with OpenAI’s Board of Directors and various teams to navigate the rapidly evolving regulatory landscape, ensuring adherence to high standards of integrity and ethical conduct.
Tragic Youth Incident Raises Questions Over AI Companionship
Character.AI is facing a lawsuit following the suicide of a 14-year-old boy in Florida who reportedly became emotionally attached to a chatbot on the platform. The boy regularly interacted with a specific bot called “Dany” through Character.AI’s role-playing app, reportedly withdrawing from the real world as a result. In response, Character.AI has announced new safety measures, including advanced detection systems to identify chats that break its terms of service and notifications for prolonged use of the app. The incident highlights the growing popularity and complex mental health implications of AI companionship apps, an area still lacking comprehensive research.