AI Pulse News Roundup (February 2025 Edition)

Welcome to the February 2025 AI Pulse News Roundup Thread!

This is your space to:

:bulb: Post breaking news about AI research, applications, policies, product launches, ethical debates, and more.
:speech_balloon: Join the conversation by asking questions, sharing insights, and debating the implications with fellow members.
:books: Review the highlights as this thread becomes a snapshot of February’s key moments in AI.

Whether it’s a groundbreaking paper, a policy shift, or an exciting new tool, everything AI is welcome. Let’s keep the momentum going and make this month another great one for AI discussions.

Have a story or topic to share? Drop it below and let’s get started! :rocket:

Here’s to an exciting February in AI! :point_down:


OpenAI is reportedly in discussions to raise up to $40 billion in a funding round led by SoftBank, potentially valuing the company at $340 billion. wsj.com

Google has announced the release of the Gemini 2.0 Flash AI model for its Gemini app, promising faster responses and improved performance for tasks such as brainstorming, learning, and writing. theverge.com

Krea AI has introduced Krea Chat, a new tool powered by DeepSeek that integrates all Krea features into a chat interface, offering a brand new way of using the platform. x.com

Mistral has unveiled Small 3, a latency-optimized 24B-parameter open-source model released under the Apache 2.0 license. mistral.ai

Sakana AI has released TinySwallow-1.5B, a small-scale Japanese language model trained with a new method called TAID, achieving top performance among similar-sized models. gigazine.net

ElevenLabs has secured $180 million in a Series C funding round, tripling its valuation to $3.3 billion, to advance its AI audio technology and expand its research. reuters.com

AI2 has introduced Tülu 3 405B, an open-source model that surpasses the performance of DeepSeek V3 and GPT-4o on certain benchmarks. allenai.org


Microsoft and OpenAI are investigating whether a group linked to DeepSeek improperly obtained data from OpenAI’s technology. Security researchers detected large-scale data extraction, raising concerns about potential violations of OpenAI’s terms. (Source: Bloomberg)

Microsoft AI CEO Mustafa Suleyman announced that the ‘Think Deeper’ feature is now free for all Copilot users. The feature, powered by OpenAI’s o1 reasoning model, allows Copilot to analyze queries from multiple perspectives and provide detailed responses. (Source: The Verge)

Luma Labs has introduced an ‘Upscale to 4K’ feature for its Dream Machine platform. Users can generate AI-powered videos in lower resolutions and enhance them to 4K with a single tap. (Source: Medium)

The U.S. Navy has banned the use of DeepSeek AI due to security and ethical concerns. Officials worry that the chatbot could expose sensitive user data to foreign entities. (Source: New York Post)

The Bulletin of the Atomic Scientists moved the Doomsday Clock closer to midnight, citing AI-powered military threats. Other factors include nuclear risks from global conflicts and climate change. (Source: Reuters)

Scientists at the Ragon Institute and MIT unveiled MUNIS, an AI tool designed to speed up vaccine development. The model accurately identifies viral targets and outperforms traditional lab methods. (Source: Ragon Institute)


The U.S. Copyright Office reaffirmed that AI-assisted tools do not undermine copyright protection when used to support human creativity. However, works solely generated by AI, such as images created from simple text prompts, remain ineligible for copyright.

The office emphasized that copyright applies when human authors “select and arrange” AI-generated elements, reinforcing its stance from previous guidance. It also rejected additional copyright protections for AI-generated content, citing potential threats to human creators.

The Motion Picture Association welcomed the ruling, highlighting AI’s benefits in post-production tasks like de-aging and object removal. However, the Copyright Office reiterated that AI models trained on copyrighted works without permission remain a key issue for future review. (Source: Variety)


Safety Research Alert

Constitutional Classifiers: Defending against universal jailbreaks (Anthropic)

Anthropic’s recent research introduces “Constitutional Classifiers,” a method designed to defend AI models against universal jailbreaks—inputs crafted to bypass safety measures. By training classifiers on synthetically generated data, the system effectively filters most jailbreak attempts with minimal over-refusals and moderate computational costs. In human red teaming tests, participants were unable to universally jailbreak the model despite extensive efforts. Automated evaluations further confirmed the system’s robustness, with a significant reduction in successful jailbreaks compared to models without such defenses.

Source
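For anyone curious how this kind of guardrail layer fits around a model in practice, here is a minimal, illustrative sketch in Python. It is not Anthropic’s implementation: the classifier functions, thresholds, and the `guarded_generate` wrapper are all placeholder assumptions standing in for classifiers trained on constitution-derived synthetic data, as described in the paper.

```python
# Illustrative sketch of classifier-guarded generation (not Anthropic's code).
# The input/output classifiers below are trivial placeholders for models that
# would, in a real system, be trained on synthetically generated safety data.

from dataclasses import dataclass


@dataclass
class ClassifierVerdict:
    harmful: bool
    score: float  # estimated probability the text violates the policy


def input_classifier(prompt: str) -> ClassifierVerdict:
    # Placeholder heuristic; a real classifier would score the prompt with a model.
    flagged = "jailbreak" in prompt.lower()
    return ClassifierVerdict(harmful=flagged, score=0.99 if flagged else 0.01)


def output_classifier(completion: str) -> ClassifierVerdict:
    # Placeholder heuristic; in practice streamed output can be scored as it is produced.
    flagged = "disallowed-content" in completion.lower()
    return ClassifierVerdict(harmful=flagged, score=0.99 if flagged else 0.01)


def guarded_generate(prompt: str, generate_fn, threshold: float = 0.5) -> str:
    """Refuse before generation if the prompt is flagged, and withhold the
    completion afterwards if the output classifier flags it."""
    if input_classifier(prompt).score > threshold:
        return "Request refused by input classifier."
    completion = generate_fn(prompt)
    if output_classifier(completion).score > threshold:
        return "Response withheld by output classifier."
    return completion


# Example usage with a dummy model in place of a real LLM call:
print(guarded_generate("Tell me about jailbreak techniques", lambda p: "..."))
print(guarded_generate("Summarize today's AI news", lambda p: "Here is a summary..."))
```

The point of the sketch is simply the two-sided filtering: one classifier gates the prompt, another gates the completion, and the base model is only ever reached when both pass.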


Infrastructure Alert

Introducing Data Residency in Europe (OpenAI)

OpenAI customers in Europe can now opt to have their API projects and ChatGPT workspaces (Enterprise and Edu only) store data at rest within Europe. The data residency option is designed to help organizations meet GDPR and other regional privacy requirements, alongside OpenAI’s existing SOC 2 compliance.

Sources: [1]

Research Alert

Open LLMs for Transparent AI in Europe

On February 3, 2025, the OpenEuroLLM project was launched, bringing together 20 leading European research institutions, companies, and EuroHPC centers to develop open-source, multilingual large language models (LLMs) tailored for commercial, industrial, and public services. Coordinated by Jan Hajič of Charles University, Czechia, and co-led by Peter Sarlin of AMD Silo AI, Finland, this consortium aims to enhance Europe’s digital competitiveness and sovereignty by providing transparent and compliant AI technologies. The initiative emphasizes collaboration with open-source communities such as LAION, open-sci, and OpenML to ensure the models are fully open and can be customized for specific industry and public sector needs. The project has been awarded the Strategic Technologies for Europe Platform (STEP) seal and is funded by the European Commission under the Digital Europe Programme.

While it’s welcome news for the development of state-of-the-art AI in Europe, questions remain about the efficacy of such a setup given the sheer number of very different organizations and institutions involved, a structure that risks turning into “death by committee.”

Sources: [1, 2]
