Pushing to stay relevant in the exploding field of AI, Meta has launched a new organization, the Open Innovation AI Research Community, to encourage what it describes as “transparency, innovation, and collaboration” among AI researchers.
Initially, the group’s focus will be on the privacy, safety, and security of large language models such as OpenAI’s ChatGPT; providing feedback to improve AI models; and setting the agenda for future research. Meta says it expects its own researchers to participate in the organization, but that the Open Innovation AI Research Community will always be “member-led,” with Meta’s AI R&D group, Meta AI, serving as a “facilitator.”
“This group will be a community of practice championing large open source foundation models where partners can collaborate and engage with each other, share learnings, and ask questions about how to build responsible and secure foundation models,” Meta wrote in a blog post. “They will also accelerate the training of the next generation of researchers.”
Meta intends to sponsor a series of workshops focused on “critical open research questions” and “develop guidelines for the responsible development and release of open source models.” But details beyond that remain unclear. Meta says the Open Innovation AI Research Community may eventually have a website, social channels for collaboration, and research submissions to academic conferences, but does not commit to any of these.
Members of the Open Innovation AI Research Community will apparently need to fund their own work. Meta doesn’t indicate that it will set aside capital or compute for the group’s efforts – in fairness, perhaps to avoid the perception of undue influence. But that could be a hard sell, factoring in the high costs associated with AI research.
Frankly, the Open Innovation AI Research Community comes across as a performative gesture from a company that has repeatedly been embroiled in AI-related controversies.
Late last year, Meta was forced to withdraw an AI demo after it wrote racist and inaccurate scientific literature. Reports have characterized Meta’s AI ethics team as largely toothless and the anti-AI-bias tools it has released as “completely insufficient.” Meanwhile, academics have accused Meta of exacerbating socioeconomic inequality through its ad-serving algorithms and of showing bias against Black users in its automated moderation systems.
Will the Open Innovation AI Research Community change any of this? It seems unlikely. Meta encourages “professors at accredited universities” with “AI-relevant experience” to participate, but this author wonders why they would, given the existence of open source machine learning research communities with no affiliation to any Big Tech company.
Maybe I’ll be proven wrong. Perhaps Meta’s Open Innovation AI Research Community will actually live up to its promise, creating “a positive set of dynamics to drive more robust and representative models.” But I question the sincerity and level of dedication on Meta’s part here – especially given how few resources have been poured into the effort from the start.
The deadline to apply to the Open Innovation AI Research Community is September 10. Meta says it welcomes applicants from “multiple research disciplines” with the “technical ability to pursue research,” and that more than one participant from the same university may apply.