
11 Key Highlights of Global AI Technology Developments in the First Half of 2023

In the first half of 2023, AI technology took a significant leap forward. This article shares 11 key insights drawn from our hands-on experience with AI, primarily from iKala's AI-powered products, including KOL Radar, our influencer marketing platform with data on over one million influencers worldwide, and iKala CDP, our customer data platform. The insights also draw on our R&D work helping clients implement AI solutions. iKala remains committed to sharing our results through channels such as Hugging Face and public research papers.

1. Can a downsized AI model (AI brain) retain the same level of intelligence?

Achieving this is challenging, as the number of parameters in a large language model (LLM) remains a decisive factor in its capabilities. We tested more than 40 models, most of them below 30B parameters, and found that shrinking the AI brain does compromise its comprehension. The paper "The False Promise of Imitating Proprietary LLMs" summarizes this finding. Still, who wouldn't want a model that is small, performs well, and runs fast? The research community therefore continues to invest heavily in downsizing models. It is worth noting that an AI's ability to write programs seems to have no inherent correlation with model size, which diverges from the trends observed in other tasks requiring human-like cognition. We currently cannot explain this phenomenon.

2. The race of exaggerated claims around open-source language models across sectors

Continuing from the previous point: over the past six months, numerous public models have been released. Many companies and developers claim that with just a few hundred (or thousand) dollars, a small amount of training data, and a short training run, they can reach, say, 87% of GPT-4's performance. These seemingly remarkable results should be treated as reference points only; it is crucial to deploy the models and reproduce the experiments yourself to validate the claims.
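For teams that want to check such claims before relying on them, below is a minimal sketch of the kind of local reproduction we mean, assuming a machine that can host a 7B-class model; the model name and prompts are placeholders, not recommendations.

```python
# Minimal sketch: locally reproduce an open-source model's claimed quality on your
# own prompts before trusting published benchmark numbers. The model name and the
# prompts below are placeholders, not recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "databricks/dolly-v2-7b"  # placeholder: any open model under evaluation

tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" requires the accelerate package; it spreads the model across
# the available devices automatically.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto",
                                             torch_dtype="auto")

# A handful of prompts drawn from your own business domain, not the vendor's demo set.
prompts = [
    "Summarize the key deliverables of an influencer marketing campaign in three bullet points.",
    "Draft a short product description for a waterproof hiking backpack.",
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Strip the prompt tokens so only the model's continuation is printed.
    completion = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                                  skip_special_tokens=True)
    print(f"PROMPT: {prompt}\nOUTPUT: {completion}\n" + "-" * 40)
```

Judging the outputs on your own domain prompts, rather than on the benchmark numbers quoted in a release announcement, is usually enough to reveal how large the gap to GPT-4 really is.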

3. The trend of AI privatization and customization has already emerged

Customers who have adopted AI quickly have widely acknowledged how difficult it is to replicate large-scale models such as GPT-4, Claude, Midjourney, and PaLM 2, and that it is unnecessary to allocate significant resources solely to bring these large models into their own businesses. For the majority of enterprises, a "general-purpose LLM" is not necessary; what they need are language models tailored to their specific business models.

4. Weaken the overall model or remove specific capabilities

Because replicating large models is rarely feasible and enterprises are under pressure to adopt AI quickly, the current direction is to train "industry-specific models." The approach is either to remove specific capabilities of the language model outright (e.g., it can only listen, speak, and read, but not write) or to weaken the entire model (e.g., its listening, speaking, reading, and writing all become less proficient). For this approach, see the paper "Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes."
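As a loose illustration of the distillation idea (simplified, not the paper's exact multi-task recipe), the sketch below fine-tunes a small seq2seq student on answers plus rationales that would, in practice, be generated by a much larger teacher model; the model name, example records, and hyperparameters are placeholders.

```python
# Loose illustration of "distilling step-by-step" (simplified): a small student
# model is fine-tuned on answers plus rationales that, in practice, come from a
# much larger teacher model. Model name, records, and hyperparameters are placeholders.
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Trainer, TrainingArguments)

student_name = "google/flan-t5-small"  # placeholder small student model
tokenizer = AutoTokenizer.from_pretrained(student_name)
student = AutoModelForSeq2SeqLM.from_pretrained(student_name)

# In practice these pairs come from prompting the teacher for an answer *and* a
# step-by-step rationale; here they are hard-coded stand-ins.
records = [
    {"input": "Question: Is an influencer with 500 followers a 'mega influencer'? Explain.",
     "target": "No. Rationale: mega influencers are usually defined by follower counts in the millions."},
    {"input": "Question: Does a customer data platform store first-party data? Explain.",
     "target": "Yes. Rationale: a CDP unifies a company's own customer records into one profile store."},
]

def tokenize(batch):
    # Tokenize inputs and targets; the collator pads both at batch time.
    model_inputs = tokenizer(batch["input"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["target"], truncation=True, max_length=256)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

dataset = Dataset.from_list(records).map(tokenize, batched=True,
                                         remove_columns=["input", "target"])

trainer = Trainer(
    model=student,
    args=TrainingArguments(output_dir="student-distilled", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to="none"),
    train_dataset=dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=student),
)
trainer.train()
```

The result is deliberately narrower than the teacher: the student only has to handle the tasks represented in the distilled data, which is exactly the trade-off an industry-specific model accepts.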

5. Dual-track thinking in enterprise AI implementation

Business owners are currently working out how to apply AI to their existing business models and raise the productivity of their workforce. Organizations are actively breaking down internal workflows, streamlining processes, and automating tasks to improve individual productivity, while at the same time exploring the added value AI could bring to their business models. Many of our customer companies start by establishing a "data hub" with iKala CDP, accelerating data organization while they explore AI opportunities. Data is the foundation of AI, and since AI will undoubtedly play a significant role in the future, it is crucial to take care of this groundwork during the exploration phase. As a result, we are seeing rapid market development in big data and cloud, driven by AI. Unlike past digital transformations that suffered from unclear objectives, the current push toward "intelligence" has well-defined goals that make the effectiveness of AI visible, so business owners are increasingly willing to invest in it.

6. The trend of platformization for large-scale models has emerged

Continuing from the discussion of large-scale models and returning to economies of scale: only the Tech Giants can afford the training and operational costs of these models, and by driving down unit service costs they create barriers for smaller competitors. Consequently, these models are moving toward platformization. They serve as the foundation for private models, letting external companies and developers access the outputs of large models at low cost, while access to the underlying details remains limited. For now, the Tech Giants have little incentive to disclose the specifics of these models, with Meta being an exception, as discussed in the next point. Unless governments regulate and intervene, this path will remain long and difficult. Ultimately, policymakers are likely to seek a balance between commercial interests and national (or regional) governance in their relationship with the Tech Giants; overall, the Tech Giants are unlikely to suffer significant harm.

7. Meta has already taken the lead in the open-source AI community

With the widespread popularity of LLaMA and SAM (Segment Anything Model), Meta has regained a significant level of influence in the AI landscape. However, what sets Meta apart is that the company does not rely directly on AI for revenue generation. While Google, Amazon, and Microsoft offer AI services on their cloud platforms for enterprise leasing, and OpenAI sells subscriptions for ChatGPT, Meta continues to generate substantial advertising revenue through its immensely powerful community platform and network effects. Therefore, Meta's commitment to openness in AI undoubtedly surpasses that of other Tech Giants.

8. The development of AI models is predominantly led by the English language system

The majority of (open-source) models clearly perform best in English, which widens the technological gap between Western countries and the rest of the world. It also shapes the future usage habits of global users and may even threaten the prevalence of certain languages. As a result, governments and large private enterprises in various countries have started developing their own large-scale models to counter this "linguistic hegemony." However, even with significant resources invested in training country-specific models, the key factor remains demand for their use: once training AI models reaches the national level, it becomes a matter of marketing and service rather than a technical challenge, and the relevant entities in each country should focus their efforts accordingly.

9. The majority of generative AI startups lack a competitive advantage

Even the sustainability of ChatGPT's own business model remains uncertain, let alone that of startups that rely solely on OpenAI's API. Companies that have already achieved economies of scale in specific business domains may move more slowly, but once they work out how to implement and apply AI, they can surpass these startups by a significant margin. In addition, the Tech Giants keep driving down the cost of using generative AI through economies of scale, further amplifying the challenges generative AI startups face. The primary value of AI therefore still lies in the "application domain"; launching a startup on AI technology alone is extremely challenging and usually requires capital to be involved from the start.

10. Artificial General Intelligence (AGI)

The goal of achieving AGI is still distant, although the unexpected research results from GPT-4 and from DeepMind's work in reinforcement learning suggest that machines taking autonomous action is possible. Most current attempts at AGI, however, are high-level combinations of various toolchains with existing LLMs, tackling tasks incrementally. The high costs involved have led many open-source projects to be abandoned, and no fully generalizable solution has emerged; the current progress is more reminiscent of Robotic Process Automation (RPA) than of AGI.
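To make the "toolchain + LLM" pattern concrete, here is a minimal, illustrative agent loop of the kind these projects build on; call_llm is a placeholder to be wired to whichever chat-completion API is used, and the two tools are trivial stand-ins.

```python
# Minimal illustration of the "toolchain + LLM" agent loop that most current
# AGI-style projects build on. call_llm is a placeholder for whichever
# chat-completion API is used; the two tools are trivial stand-ins.
import json

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM and return its raw text reply."""
    raise NotImplementedError("wire this up to an actual chat-completion API")

TOOLS = {
    "search": lambda query: f"(pretend search results for '{query}')",
    # Toy calculator; eval is restricted here and only suitable for a demo.
    "calculator": lambda expression: str(eval(expression, {"__builtins__": {}})),
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Previous steps: {json.dumps(history)}\n"
            'Reply with JSON only: {"tool": "search" | "calculator" | "finish", "input": "..."}'
        )
        decision = json.loads(call_llm(prompt))  # assumes the model returns valid JSON
        if decision["tool"] == "finish":
            return decision["input"]             # the model's final answer
        observation = TOOLS[decision["tool"]](decision["input"])
        history.append({"tool": decision["tool"], "observation": observation})
    return "stopped: step budget exhausted"
```

Each pass through the loop handles one small step and feeds the observation back into the next prompt, which is why the overall behaviour feels closer to scripted process automation than to a genuinely autonomous system.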

11. To unleash the value of AI, the most crucial factors are "trust," "user experience," and "business model"

These are the three essential elements AI needs in order to expand into every aspect of human society. While the field of Explainable AI (XAI) is growing rapidly, it is still in its early stages: large-scale models like GPT-4 remain enigmatic to most, and there is much progress to be made in understanding their decision-making and reasoning processes. User experience is another significant issue, as integrating AI into existing products and services opens exciting new opportunities while also challenging users' established habits. As for the business model, it is as discussed above.
