Google says Gemini, launching today inside the Bard chatbot, is its “most capable” AI model ever. It was trained on video, images, and audio as well as text.
The transformer architecture GPT is based on came from Google. I’m sure the delay has more to do with Google trying to mitigate the liability issues that arise with large-scale general-public usage, and letting OpenAI “test the waters” first.
Quite possible. Whatever the case, they apparently saw no pressure to innovate. It implies that tech development is being slowed down by the Big Tech monopolies.
It just seems that Google should have been able to move faster. Yes, they did publish a lot of important stuff, but seeing the splash that came from Stability and OpenAI, they seem to have done so little with it. What their researchers published was important, but I can’t help thinking that a public university would have disseminated such research more openly and widely. Well, I may be wrong. I don’t have inside knowledge.
lack of innovation on what exactly? They’re all exploring new things if you search for it.