Introducing AI Pulse: Your Go-To AI News Update for the Developer Community

Hi OpenAI Developer Community,

We’re excited to introduce AI Pulse—a bi-weekly news update designed specifically for busy developers like you. Stay on top of the most crucial developments in AI, including policy changes, legal updates, technological breakthroughs, economic trends, and the latest research—all in one convenient place.

What You Can Expect:

  • Timely and Curated Updates: We’ll cover the most relevant AI news, tailored to the needs of our developer community.
  • Opportunity for Engagement: Share your views and engage with fellow users on the featured updates.

We Need Your Help!
This is just the beginning, and we want to make sure AI Pulse delivers real value to you. Your feedback is essential to help us refine this update.

  • What do you like about this format?
  • What could be improved?
  • Are there any specific topics or types of content you’d like to see more of?
  • How can we make these updates more useful for your work?

We’re committed to creating something truly valuable together, so don’t hesitate to share your thoughts.

Looking forward to hearing from you!

Best regards,
The AI Pulse Team

@PaulBellow (Editor-in-Chief), @vb, @jr.2509

29 Likes

AI Pulse – Edition 1

Table of Contents

1. Government & Policy Initiatives
2. Legal Matters
3. Technology Updates
4. AI Economics
5. Research


1. Government & Policy Initiatives

U.S. AI Safety Institute Collaborates with Anthropic and OpenAI on AI Safety Research

The U.S. Artificial Intelligence Safety Institute, under the Department of Commerce’s National Institute of Standards and Technology (NIST), has announced agreements with Anthropic and OpenAI for formal collaboration on AI safety research, testing, and evaluation. The agreements will allow the Institute to access new models from both companies before and after their public release, providing an opportunity to evaluate model capabilities and associated safety risks as well as to collaborate on methods to mitigate these risks. Link

California’s Controversial AI Safety Bill Clears Legislature

California lawmakers have passed the contentious AI safety bill, SB 1047, which now awaits the signature of California’s Governor Gavin Newsom. The bill mandates safety testing, including third-party audits of safety practices, for advanced AI models, defined as those that cost over $100 million to develop or require significant computing power. It also requires AI developers to outline methods for disabling AI models if they malfunction, effectively implementing a kill switch. The bill empowers the state attorney general to sue non-compliant developers, particularly in the event of an ongoing threat, such as AI taking over government systems. OpenAI’s Chief Strategy Officer, Jason Kwon, previously expressed concerns over the bill, stating that it could slow progress and cause companies to leave the state. He stressed that AI regulation should be left to the federal government, arguing that a federally driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the U.S. to lead the development of global standards. Link

Biden-Harris Administration Targets Ineffective AI Chatbots

The Biden-Harris Administration has launched the “Time Is Money” initiative to address corporate practices that waste consumer time, including ineffective AI chatbots. Under the initiative, the Administration is looking to actively address the challenges and limitations of consumer chatbots, such as consumers receiving inaccurate information, and has signaled plans by the Consumer Financial Protection Bureau (CFPB) to release new rules or guidance setting out when the use of automated chatbots or automated artificial intelligence voice recordings is permissible. Link

2. Legal Matters

SAG-AFTRA and Narrativ Establish Ethical AI Agreement

SAG-AFTRA has partnered with Narrativ to offer its members the opportunity to ethically license their digital voice replicas for use in digital audio advertising. The agreement ensures that performers have control over the use of their digital voices, including informed consent, fair compensation, and the ability to set personal ad preferences. Link

3. Technology Updates

OpenAI Launches Fine-Tuning for GPT-4o and Enhances File Search Controls in Assistants API

OpenAI has introduced new features for developers, including the launch of fine-tuning for GPT-4o, expanding on the previously available GPT-4o-mini fine-tuning. This allows developers to customize the structure and tone of model responses or ensure the model follows complex domain-specific instructions more consistently. Through September 23, developers can take advantage of 1M free training tokens per day for GPT-4o fine-tuning and 2M free tokens per day for GPT-4o-mini fine-tuning. Additionally, OpenAI has improved the File Search controls in its Assistants API to enhance response relevance. Developers can now inspect and re-rank search results, adjust the ranker, and set a relevance score threshold for file chunks used in generating responses. Link
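
For developers who want to try these two features, here is a minimal sketch using the official `openai` Python SDK. The training file ID, model snapshot name, and relevance threshold below are placeholder assumptions, so check the fine-tuning and Assistants documentation for the exact values that apply to your account.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1) Start a GPT-4o fine-tuning job (training file uploaded beforehand via client.files.create).
job = client.fine_tuning.jobs.create(
    model="gpt-4o-2024-08-06",    # GPT-4o snapshot that supports fine-tuning
    training_file="file-abc123",  # placeholder file ID
)
print(job.id, job.status)

# 2) Create an assistant whose file_search tool re-ranks retrieved chunks and
#    drops anything below a relevance score threshold.
assistant = client.beta.assistants.create(
    model="gpt-4o",
    tools=[{
        "type": "file_search",
        "file_search": {
            "ranking_options": {
                "ranker": "auto",        # let the API choose the ranker
                "score_threshold": 0.6,  # placeholder cutoff; tune for your data
            }
        },
    }],
)
print(assistant.id)
```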

Microsoft Enhances AI Offering with New Phi Model

Microsoft has announced several updates to its AI offerings, including enhancements to the Phi family of models. The Phi-3.5-MoE (Mixture of Experts) model integrates 16 smaller experts into a single system with a total of 42 billion parameters, while activating only 6.6 billion parameters at any given time. This design offers the computational efficiency and latency of a smaller model while maintaining the high-quality output of a larger one. Additionally, Microsoft has introduced a new 3.8 billion parameter mini model, Phi-3.5-mini, which now supports over 20 languages, expanding its global usability. Link
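
To illustrate why a mixture-of-experts model can hold 42 billion parameters while activating only a fraction of them per token, here is a toy top-k routing sketch in PyTorch. It is purely illustrative, not Microsoft’s implementation; the hidden size and expert dimensions are made-up values.

```python
import torch
import torch.nn as nn

class ToySparseMoE(nn.Module):
    """Toy top-k mixture-of-experts layer: all experts live in memory,
    but each token only runs through the k experts its router selects."""

    def __init__(self, d_model=512, n_experts=16, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores every expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                            # x: (tokens, d_model)
        weights = self.router(x).softmax(dim=-1)     # routing probabilities
        topk_w, topk_idx = weights.topk(self.k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):                   # only k of n_experts run per token
            for e in topk_idx[:, slot].unique().tolist():
                mask = topk_idx[:, slot] == e
                out[mask] += topk_w[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

x = torch.randn(8, 512)           # 8 tokens with a made-up hidden size
print(ToySparseMoE()(x).shape)    # torch.Size([8, 512])
```

Because only the selected experts execute, compute per token scales with the active parameters (the 6.6 billion figure) rather than with the full 42 billion held in memory.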

Anthropic Reveals System Prompts for Claude AI Models and Makes Artifacts Available for All Users

In an effort to enhance transparency, Anthropic has published the system prompts for its latest models Claude 3 Opus, Claude 3.5 Sonnet, and Claude 3 Haiku, marking a first in the industry. The system prompts reveal the models’ operational boundaries and desired traits such as intellectual curiosity and impartial handling of controversial topics. Alongside this release, Anthropic has also expanded the availability of its Artifacts feature to all users of its platform. Artifacts allow for the creation and management of various creative outputs like code snippets and interactive dashboards directly within the app. Link

Google Introduces Multiple Gemini Updates, Including Custom Gems, Imagen 3 and New Experimental Models

Google has rolled out several updates to its Gemini platform. These include the introduction of custom Gems, which allow users to create personalized AI experts, improvements to its Imagen 3 text-to-image model, and the release of three experimental models: a new smaller variant, Gemini 1.5 Flash-8B, along with enhanced Gemini 1.5 Pro and Gemini 1.5 Flash models. Additionally, Structured Outputs are now available in the Gemini API. Link
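
As a rough sketch of how Structured Outputs can be requested through the `google-generativeai` Python SDK, under the assumption that the prompt, schema, and model name shown here are placeholders (the 8B experimental variant may use a different identifier; consult Google’s documentation for current names):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Model name is an assumption for illustration.
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "List three AI news headlines from this week.",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",  # ask for JSON-only output
        response_schema=list[str],              # constrain output to a JSON array of strings
    ),
)
print(response.text)
```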

X.ai Releases Grok-2 and Grok-2 Mini

X.ai has released its new models, Grok-2 and Grok-2 Mini, in beta. Both models offer improved chat, coding, and reasoning capabilities compared with the preceding Grok-1.5 model. Grok-2 was initially tested on the LMSYS leaderboard under the pseudonym “sus-column-r”, where it outperformed Claude 3.5 Sonnet and GPT-4-Turbo at the time of launch in terms of overall Elo score. Both models are currently accessible to X Premium users and are due to become available via the platform’s enterprise API. Link

4. AI Economics

OpenAI Considers Corporate Restructuring Amid Funding Talks

OpenAI is reportedly in talks to raise billions in a new funding round that could value the company at over $100 billion. The round, led by venture capital firm Thrive Capital, could see participation from Apple and Nvidia, alongside existing OpenAI partner Microsoft, which already holds a 49% share of OpenAI’s profits after investing roughly $13 billion in the startup since 2019. To attract more investors, OpenAI is also said to be considering simplifying its complex non-profit corporate structure. The current arrangement, in which equity is issued via a for-profit subsidiary governed by a non-profit board, has been under scrutiny, with some investors suggesting that a shift to a simpler for-profit structure would be more beneficial. Link

Big Tech’s AI Investment Surge Continues

Big Tech companies, including Microsoft, Alphabet, Amazon, and Meta, have collectively increased their AI investment to over $100 billion in the first half of 2024. Despite scepticism from Wall Street regarding the returns on these investments, the firms remain committed to further expanding their AI infrastructure over the next 18 months. Link

5. Research

MIT Releases AI Risk Repository to Guide Safe AI Adoption

Researchers at the Massachusetts Institute of Technology (MIT) have released a comprehensive AI Risk Repository to serve as a common frame of reference for understanding and mitigating potential risks associated with AI deployment. The repository categorizes over 700 AI risks into a causal taxonomy, which classifies risks by their causal factors, and a domain taxonomy, which groups risks into seven domains and 23 subdomains. Link

FinalSpark Introduces Biocomputing with Human Neurons

Swiss company FinalSpark is pioneering the field of biocomputing with its “Neuroplatform”, a computing platform powered by human-brain organoids. The platform aims to run AI workloads using 100,000 times less energy than is currently required to train state-of-the-art generative AI. The organoids, which are clusters of lab-grown cells, are connected to electrodes that stimulate the neurons within the living sphere and link the organoids to conventional computer networks. The neurons are trained to form new pathways and connections, similar to how a human brain learns. Link

OECD Explores AI Integration in Public Governance

The Organisation for Economic Co-operation and Development (OECD) has released a policy paper titled Governing with Artificial Intelligence: Are Governments Ready?, which seeks to provide a roadmap for governments in navigating AI. The paper emphasizes key benefits of AI adoption, including productivity enhancement, responsiveness and inclusivity, and accountability and oversight, while warning of risks relating to bias and fairness, transparency and explainability, and data privacy and security. To mitigate these risks, the OECD recommends developing strategic frameworks, building capacity, setting regulatory standards, and fostering international collaboration. Link

19 Likes

Great job!

We’re looking for help finding the best news to include, so if you’re interested, let us know!

7 Likes

Nice work… this will be a great addition to an already awesome community.

5 Likes

This is a great summarization! Can you shift topic 3 of Tech Updates to the first spot? I found it more relevant.

2 Likes

Really interesting to see edition 1! Thanks for sharing.

Feedback

Maybe it is because I am reading this on a mobile device, but it would be super useful to have a table of contents at the top of the post so I could see everything by header and jump to the section(s) I am interested in easily, without reading the entire thing.

In terms of other topics, it would be great to hear from others who have implemented successful and unsuccessful projects, especially those utilizing newer features, who could showcase their code or approach, particularly when OpenAI tech is used alongside other solutions to create interesting things.

2 Likes

I agree with the idea of adding a table of contents! I’ve also noticed that on mobile devices, it’s difficult to quickly grasp the individual topics.

3 Likes

@PaulBellow @jr.2509 This is an awesome initiative, love it!

I am based in Europe (Sweden) and your AI Pulse is very US-localized. There are lots of things happening in the EU (we even have a little “Swedish AI Mafia” thing happening here in Sweden, which is very exciting).

If you are looking for help with the EU aspects, I would love to help!

5 Likes

My personal opinion is that we should speed up the release of SearchGPT and ChatGPT 5 products in the next few months of this year. I mean, using products to speak to the market and the government is more convincing. In addition, Google, Microsoft, and Facebook have successively launched new products, making the market situation increasingly unfavorable to OpenAI. :thinking::thinking::thinking::face_with_monocle::face_with_monocle::face_with_monocle:

1 Like

You know what would be fun? If we could also do this as a little podcast :slight_smile:

7 Likes

love that idea.

+1


3 Likes

The Information recently reported that several new language models from leading providers are set to be released this fall.

Since it’s already September, the wait isn’t much longer, and it promises to be exciting. These new, advanced models will likely replace certain products and services with their built-in capabilities while also offering new possibilities.

We’ll have to wait just a bit longer to see how quickly the language model AI space is evolving today.

2 Likes

Such a nice article. No one can stop the tsunami of this new wave of technology; we can only improve the monitoring mechanisms that estimate when it will hit. That time is precious for making preparations to meet the impact. The information about the MIT repository is indeed valuable: “The repository categorizes over 700 AI risks into a causal taxonomy that classifies AI risks by their causal factors and a domain taxonomy that categorizes AI risks into seven domains and 23 subdomains.”
This list should be thought over thoroughly with the active collaboration of government, corporates, and voluntary organisations. It’s proven wisdom that expected dangers are always less than the actual happenings :slightly_smiling_face:

4 Likes

That’s awesome! Can you add a function to allow searching history? People have their own memory, and so does ChatGPT, but it’s not easy to retrieve.

2 Likes

This looks like The AI Pulse newsletter.