AI is ringing in opportunities for semiconductor companies. Here’s how!

With machines being trained to mimic the cognitive functions of the human brain, semiconductor companies have been put on a growth trajectory they never had access to before, even with all their innovations in chip design, fabrication, and next-generation devices. Most AI applications, such as virtual assistants, rely on hardware for various functions.

Semiconductor companies could capture 40-50% of the technology stack's value

With the creation of advanced machine-learning algorithms, AI allows us to process huge data sets, learn from them, and improve over time. Deep learning, a kind of ML, made a huge leap in the 2010s when it enabled quite accurate results from a much wider range of data, with minimal data preprocessing by humans. While improving training and inference, developers often face challenges in storage, memory, networking, and logic. If semiconductor companies provide next-generation accelerator architectures, they could enhance computational efficiency.

How AI could drive a big chunk of semiconductor revenues for data centers

With hardware as the differentiator in AI, demand for existing chips will grow, but semiconductor companies could also gain by developing workload-specific AI accelerators, which do not yet exist. According to the McKinsey report, "AI-related semiconductors will see growth of about 18 percent annually over the next few years—five times greater than the rate for semiconductors used in non-AI applications. By 2025, AI-related semiconductors could account for almost 20 percent of all demand, which would translate into about $67 billion in revenue. Opportunities will emerge at both data centers and the edge. If this growth materializes as expected, semiconductor companies will be positioned to capture more value from the AI technology stack than they have obtained with previous innovations—about 40 to 50 percent of the total."

Data-Center Usage: Cloud-computing data centers use GPUs for almost all training applications. GPUs are poised to become more customized to meet the demands of DL, especially with ASICs entering the market. CPUs will lose ground to ASICs as DL-based apps come to the fore.

Edge applications: A major chunk of current edge training happens on PCs and laptops, but more devices may be used for the same purpose in the future. As most edge devices rely on CPUs or ASICs, by 2025 ASICs are expected to account for 70% of the edge-inference market, while GPUs will account for 20%.

Memory: Memory, especially dynamic random-access memory (DRAM), is needed to store data inputs as well as for other tasks during inference and training. AI will open up opportunities for the memory market: even a model as small as one trained to recognize the image of a flower must bank on memory while it works through its algorithms. AI chip leaders such as Google and Nvidia have adopted high-bandwidth memory (HBM) as their preferred memory solution, even though it costs roughly three times as much as traditional DRAM, which shows that customers are willing to pay for expensive AI hardware if they get performance gains.

The McKinsey report lists many opportunities but also concludes: "To capture the value they deserve, they'll need to focus on end-to-end solutions for specific industries (also called microvertical solutions), ecosystem development, and innovation that goes far beyond improving compute, memory, and networking technologies."


How beauty brands are leveraging AI for customer acquisition

The global cosmetics market is projected to reach $463.5 billion by 2027. How this fast-growing industry is leveraging AI is something we can all learn from. Customer acquisition is a big part of revenue generation, but bigger still, perhaps, is customer retention. L'Oréal got its head in the right place with its AR- and AI-powered mobile app StyleMyHair. Among its other functions, the app points users to the nearest hair salons where they can get their hair styled immediately. L'Oréal's at-home skin care assistant, Perso, creates personalized skin care formulas using AI. The system uses BreezoMeter, which draws on geo-location data to determine localized environmental conditions that can affect the customer's skin, such as UV index, temperature, pollen, and humidity. Used regularly, Perso's AI platform can not only assess skin conditions but also personalize formulas with better precision.

Another success story worth sharing is skincare brand MAELOVE's use of artificial intelligence to analyse more than three million online product reviews to understand what its customers need and deliver accordingly. Founded by a team of MIT graduates, the brand's success rides on its use of research to make formula blueprints. Theirs is a "radically affordable" skin care line in which everything is priced under $30. The bestseller, though, is the $28 Glow Maker, which boasts an ingredient list quite similar to that of the award-winning CE Ferulic serum priced at $166. The Glow Maker's success is AI-backed: millions of product reviews were analysed to work out which ingredients worked and which didn't. It is interesting to note, then, that The Glow Maker has already sold out four times and is taking pre-orders for a fifth run.
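To make the idea concrete, here is a minimal sketch of mining reviews for which ingredients customers respond well to. The reviews, ingredient list, and keyword-matching approach are all invented for illustration; a real pipeline like MAELOVE's would use far larger data and proper NLP rather than simple substring matching.

```python
from collections import Counter

# Toy corpus standing in for scraped product reviews (hypothetical data).
reviews = [
    ("Vitamin C brightened my skin, love the ferulic acid too", 5),
    ("The fragrance broke me out, but hyaluronic acid felt great", 2),
    ("Ferulic acid serum faded my dark spots", 5),
    ("Too much fragrance, skin got irritated", 1),
]
ingredients = ["vitamin c", "ferulic acid", "hyaluronic acid", "fragrance"]

def ingredient_scores(reviews, ingredients):
    """Average star rating of the reviews mentioning each ingredient."""
    totals, counts = Counter(), Counter()
    for text, stars in reviews:
        lowered = text.lower()
        for ing in ingredients:
            if ing in lowered:
                totals[ing] += stars
                counts[ing] += 1
    return {ing: totals[ing] / counts[ing] for ing in counts}

scores = ingredient_scores(reviews, ingredients)
# Ingredients sorted from best- to worst-received.
ranked = sorted(scores, key=scores.get, reverse=True)
```

Even this crude scoring separates ingredients that reviewers praise from those they complain about; a formula blueprint would then favour the former.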

Methods at a glance

Product tagging helps in better product discovery. Products that are frequently bought together are flashed to consumers on e-commerce portals, gently nudging them to buy (sometimes at a discounted price). The home pages of various portals display personalized pictures of the products on offer, matched to the customer's tastes. Brands send out engagement emails with personalized promotions based on customer data. When customers abandon online shopping carts, emails with promotions encourage them to complete their purchases. These emails are often also used for cross-promotions.
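The "frequently bought together" nudge above can be sketched with simple basket co-occurrence counts. The baskets and product names below are hypothetical; production systems typically use association-rule mining or learned embeddings instead.

```python
from itertools import combinations
from collections import Counter

# Hypothetical order history; each set is one shopping basket.
baskets = [
    {"serum", "moisturizer"},
    {"serum", "moisturizer", "sunscreen"},
    {"lipstick", "sunscreen"},
    {"serum", "sunscreen"},
]

# Count how often each pair of products appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def bought_together(product, k=2):
    """Top-k products most often co-purchased with `product`."""
    related = Counter()
    for (a, b), n in pair_counts.items():
        if product == a:
            related[b] += n
        elif product == b:
            related[a] += n
    return [p for p, _ in related.most_common(k)]
```

Calling `bought_together("serum")` on this toy data surfaces the moisturizer and sunscreen that co-occur with it, which is exactly the list a portal would flash next to the product page.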

How Leveraging AI Could Make Art Businesses Grow

Acquiring new customers is always a challenge for art dealers and gallerists; even the Art Basel Report for 2019 indicated as much. What helps overcome this challenge is knowing the demographics of your audience. With the art market no longer confined to a particular city, state, or country, and with the trade newly digitized, the art buyers of yore, largely male, have made way for younger enthusiasts who are investing in art. Millennials are, in fact, increasingly becoming art buyers and collectors: according to the 2019 Art Basel report, they comprise 46% of the high-net-worth collectors surveyed in Singapore and 39% in Hong Kong, and 69% of millennials purchased fine art and 77% purchased decorative art between 2016 and 2018.

Know thy customer

While demographics help in segmenting customers at a general level, psychographics help develop personas by revealing customer needs and buying behaviour. Psychographics thus help build customers' online personas and accurately predict what makes them convert. A combination of demographic, psychographic, and behavioural data for arriving at target groups would serve art sellers best.
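One common way to turn combined demographic and behavioural data into target groups is clustering. Below is a minimal k-means sketch on made-up, pre-normalized customer vectors; real segmentation would use many more features and a proper ML library.

```python
import random

# Hypothetical customer vectors: (normalized age, normalized purchases per year).
customers = [(0.2, 0.9), (0.25, 0.8), (0.8, 0.1), (0.75, 0.2), (0.3, 0.85)]

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 2-D points; returns final centers and clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = []
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2
                                                  + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster.
        new_centers = []
        for i, cl in enumerate(clusters):
            if cl:
                new_centers.append((sum(x for x, _ in cl) / len(cl),
                                    sum(y for _, y in cl) / len(cl)))
            else:
                new_centers.append(centers[i])
        centers = new_centers
    return centers, clusters

centers, segments = kmeans(customers, k=2)
```

On this toy data the algorithm recovers the two obvious personas: younger frequent buyers and older occasional buyers, each of which could then be targeted differently.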

Make a move!

Another possible approach for gallerists and art sellers is precision targeting, which helps reach the right audience within the customer segment that actually converts. To the customer, precision targeting gives the feeling that the marketer has crafted a personalized experience for them by reaching out with the right message at the right time. AI data points help in studying a customer's buying habits for a particular product or service over a period of time. For example, gallerists may hold exhibitions in the months of the year when customers are most likely to buy art, or send newsletters announcing new pieces on particular days of the month.

What else to display?

ML zooms in on an artwork's salient features and compares it with other artworks to find similarities and surface pieces that buyers would prefer. Advisors and dealers can learn their clients' tastes and arrive at specific pieces likely to be picked up by buyers. Likewise, they can determine which additional artists to add to their line-ups.
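Comparing artworks by their salient features usually comes down to measuring similarity between feature vectors. Here is a hedged sketch using cosine similarity on hypothetical embedding vectors; in practice these vectors would come from a trained vision model rather than being written by hand.

```python
import math

# Hypothetical feature vectors (e.g. from a CNN embedding) for three artworks.
features = {
    "seascape_a": [0.9, 0.1, 0.3],
    "seascape_b": [0.85, 0.15, 0.35],
    "portrait_x": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity: dot product over the product of vector norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_similar(query, k=1):
    """The k artworks whose feature vectors lie closest to the query's."""
    others = [(title, cosine(features[query], vec))
              for title, vec in features.items() if title != query]
    return sorted(others, key=lambda pair: pair[1], reverse=True)[:k]
```

A dealer-facing tool would run `most_similar` over a client's past purchases to suggest pieces, or over an artist roster to find stylistic gaps worth filling.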

Authentication and Validation

Besides the conventional analysis of materials, authenticators, dealers, and auction houses can use AI-based software to detect the authenticity of an artwork. ML studies the works of various artists to learn their aesthetic style, such as the direction of movement of their medium (brush, pen, etc.), the kind of pressure they exert on the canvas, and the artists' previous works, to arrive at authenticity criteria. Sellers can deploy such software to reassure first-time buyers who might otherwise prefer to buy only from particular galleries due to authenticity concerns.

With all these new techniques made available by AI, art sellers are poised to see their customer acquisition go up and have a better run in the market.

How leveraging AI can take the business of art to a whole new dimension

Among its many firsts, AI helped resurrect Picasso's lost artwork. Sotheby's, the world's largest, most trusted and dynamic marketplace for art, has also dived deep into data, as is evident from its acquisition of the startup Thread Genius, a visual search engine of sorts that harnesses the power of neural networks and can find similar artworks to help streamline art appraisals. Delhi-based art gallery Nature Morte held a group show, Gradient Descent, in 2018 featuring AI artworks, and St. Petersburg's Dalí Museum used deepfake technology to create a life-sized deepfake of the artist from his old interviews, which delivers quotes attributed to him.

At Artmarq, as the name suggests, art market data meets artificial intelligence. Artmarq works by analysing data from public art-sale records, then tracking and adding more dimensions to the data, including art deals made online. Artists, curators, art collectors, art-fair executives, and online startup leaders all stand to benefit, as they can make informed decisions on the basis of data and analytics. Art consultants, gallerists, and those exploring the commercial side of art can use Artmarq for market research and competition analysis. Custom reports on specific artists, genres of art, or even market segments are available, making the use of data as versatile as it can be. Artmarq can also serve as a useful tool for art educators and students.

Just as with the resurrection of Picasso's lost artwork, Get-art.work uses deep learning to understand the style of an artwork and what makes it really stand out, then uses those insights to create a new piece. Users can get artwork created in the styles of greats such as Kandinsky and Van Gogh from photographs or even inputs they provide.

Bulgari too dived into the field, creating a gigantic AI-backed art installation in 2021 inspired by its Serpenti symbol. The project was undertaken in collaboration with media artist Refik Anadol to create an immersive digital artwork using real-time AI and scent augmentation. The multi-sensory artwork was exhibited in Milan, Italy, with plans to then turn it into an NFT. Take that!

Cattle ID systems are among AI-based apps helping Indian dairy farmers grow: Here’s How!

Those in the dairy industry may be aware of the ghastly practice among Indian farmers of cutting cattle's ears so the animals can be identified in cases of theft or fraud, or for tracking disease outbreaks. Developed countries such as the UK and the US use advanced cattle-identification systems such as cattle passports, with which every animal can be identified, yet some farmers still put numbers on animals for identification. Facial recognition technology could put an end to such practices, and companies such as Moo-ID and Cainthus are already working in this direction. Moo-ID, as the name suggests, helps with cattle identification, while Cainthus uses AI and computer vision with its smart cameras to observe nutritional, behavioural, health, and environmental factors that can impact production. This visual information is then turned into actionable insights that let farmers make data-driven decisions to improve farm operations and animal health. Farms such as Maddox Dairy in the USA already use this tech and feel they can know the health of all their cows without being physically present at the farm.

Moo-ID, an AI-based livestock identification system, on the other hand lets users register cattle against their Aadhaar ID. Information about the owner and the cattle is stored digitally and can later be used to verify a cow's identity.

Milktech startup MoooFarm works with Microsoft to help Indian dairy farmers tackle their losses. With services such as Digital Livestock Management, farmers can record and maintain cattle lifelines, manage their expenses, and get access to predictive analytics for dairy-farm management. MoooFarm's mission is to make farmers prosperous; to that end it connects farmers with vets at their doorstep at affordable prices and helps them purchase dairy-farming inputs, again delivered to the doorstep affordably. It also provides farmers with credit access, cattle insurance, and more.

Disease detection is an important aspect of the dairy industry, as it lets farmers stay in control of their cattle's health. One IoT device used to track cattle health data is a collar: worn on the animal's neck, it transmits the data it collects, which can then be analyzed to detect symptoms of disease.
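As a rough illustration of how collar data might be screened for trouble, the sketch below flags readings that deviate sharply from a trailing window. The activity numbers and threshold are invented; real herd-health systems model many signals (rumination, temperature, movement) together.

```python
import statistics

# Hypothetical hourly activity counts streamed from one animal's collar.
# The sharp dip partway through could signal illness.
activity = [52, 48, 50, 51, 49, 53, 50, 12, 49, 51]

def flag_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the mean of the trailing `window` readings."""
    flags = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu = statistics.mean(recent)
        sigma = statistics.stdev(recent)
        if sigma and abs(readings[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags
```

Flagged hours would be pushed to the farmer's app for a closer look, rather than treated as a diagnosis in themselves.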

How fashion brands are leveraging AI for customer acquisition

As the fashion industry rapidly adapts to new technology, brands are leveraging AI and ML to reach existing customers and attract new ones. Leading fashion brand Tommy Hilfiger turned to AI to improve its designs when it tied up with IBM and the Fashion Institute of Technology (FIT) in 2019. Their project, Reimagine Retail, was directed at using AI to map out future industry trends and improve the design process. Popular sports fashion brand Nike uses AI to keep customers happy by personalizing customer experiences and improving engagement. According to Nike, 60% of people wear the wrong shoe size; to fix this, the folks at Nike created the Nike Fit tool and integrated it into the Nike app, where customers can find their right size and even see how a product looks on their feet. This is done using a combination of technologies including computer vision, data science, augmented reality, recommendation engines, and machine learning.

Nike also has access to a wide range of customer data from its supply chain, enterprise data, and app ecosystem. In 2017, the company announced Nike Direct, a direct-to-consumer sales channel. To attract customers with product recommendations, the company has acquired four data science and analytics firms since 2018, each contributing to the ultimate goal of a better customer experience. Invertex brought Nike powerful 3D scanning technology that creates accurate models of a person's anatomy; Zodiac projects revenue streams at the individual-customer level by applying predictive behavioral models and customer analytics to target data; and Celect helps optimize inventory by applying ML to current data to predict future demand. By integrating this varied tech, Nike uses the huge amount of data at its disposal to create customized recommendations and create demand. If financial figures are anything to go by, Nike Direct sales have shot up from USD 11.7 billion in 2019 to USD 16.3 billion in 2021.

Moving on to the luxury segment, Dior too has used AI, launching a chatbot, or beauty assistant, called Dior Insider. It chats with customers on Facebook Messenger and helps them find what they are looking for.

A new study from Juniper Research found that global spending by retailers on AI services will reach $12 billion by 2023, up from an estimated $3.6 billion in 2019, with over 325,000 retailers expected to adopt AI technology over the period. The future of the fashion industry coupled with AI looks promising for sure.

Role of AI in OTT platforms

That 98% match to your preferred movie types or TV serial genres on your favourite OTT platform is the handiwork of AI. With on-demand streaming attracting more and more users globally, it is only natural for media companies to look to AI to enhance customer experience. Let us look at how they are doing it.

AI in recommendations

AI-backed recommendation engines gather, collate, and extract user data before filtering it into recommendations. OTT platforms are some of the biggest users of such engines, relying on them to push the best content to their users. The more personalized the content, the more likely the customer is to remain loyal to the platform and keep watching. The Netflix Recommendation Engine (NRE) achieves great accuracy by filtering content on the basis of an individual's user profile, using a mix of algorithms to sort over 3,000 titles at a go across 1,300 recommendation clusters. It is no wonder, then, that the market for recommendation engines is poised to reach $12.03 billion by 2025.
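The NRE itself is proprietary, but the core idea behind most recommendation engines, collaborative filtering, can be sketched in a few lines. The users, titles, and similarity measure below are toy assumptions, not Netflix's actual algorithms.

```python
# Toy ratings matrix: user -> {title: stars}.
ratings = {
    "ana":  {"thriller_1": 5, "thriller_2": 4, "romcom_1": 1},
    "ben":  {"thriller_1": 4, "thriller_2": 5, "romcom_1": 2, "thriller_3": 5},
    "cleo": {"romcom_1": 5, "romcom_2": 4, "thriller_1": 1},
}

def similarity(a, b):
    """Fraction of commonly rated titles on which two users roughly agree."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    agree = sum(1 for t in shared if abs(ratings[a][t] - ratings[b][t]) <= 1)
    return agree / len(shared)

def recommend(user):
    """Titles the most similar other user liked that `user` hasn't seen."""
    peer = max((u for u in ratings if u != user),
               key=lambda u: similarity(user, u))
    return [t for t, r in ratings[peer].items()
            if r >= 4 and t not in ratings[user]]
```

Here `recommend("ana")` finds that ben shares her tastes and surfaces the thriller she hasn't watched yet; real engines do the same thing at the scale of millions of users and thousands of titles.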

Role of ads and Metadata

As users of OTT platforms, you may have noticed that videos on these sites carry information about the visuals, the emotions, a synopsis, and the genre of the show or film, such as horror, romance, or thriller. This is the metadata with which AI can assess the scenes in a show or movie to generate teasers automatically. Meanwhile, using the same metadata, advertisers find the ideal spots to place their products.
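A toy sketch of metadata-driven teaser generation: given per-scene tags, pick the most engaging non-spoiler scenes that fit a time budget. The scene metadata and scoring fields here are entirely hypothetical.

```python
# Hypothetical per-scene metadata for one episode (times in seconds).
scenes = [
    {"start": 0,   "end": 45,  "excitement": 0.30, "spoiler": False},
    {"start": 45,  "end": 90,  "excitement": 0.90, "spoiler": False},
    {"start": 90,  "end": 150, "excitement": 0.95, "spoiler": True},
    {"start": 150, "end": 200, "excitement": 0.70, "spoiler": False},
]

def teaser_scenes(scenes, max_seconds=60):
    """Greedily pick the most exciting non-spoiler scenes that fit the budget."""
    picked, used = [], 0
    for s in sorted(scenes, key=lambda s: s["excitement"], reverse=True):
        length = s["end"] - s["start"]
        if not s["spoiler"] and used + length <= max_seconds:
            picked.append(s)
            used += length
    # Play the chosen scenes back in story order.
    return sorted(picked, key=lambda s: s["start"])
```

The same scene metadata, filtered by different fields (mood, objects on screen), is what lets advertisers pick placement spots instead of teaser cuts.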

Demand forecasting

OTT platforms can also predict user behaviour using demand forecasting: they can analyze which genres will resonate with particular audiences. Demand forecasting can thus help OTT platforms find genres for fresh content, determine a good time of the season to release new content, identify preferred languages, and so on. Some of the most-watched shows on Netflix, for example, are in other languages, such as Squid Game (Korean) and Money Heist (Spanish).
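At its simplest, demand forecasting means smoothing a historical series and projecting the level forward. Below is a simple-exponential-smoothing sketch on made-up monthly watch hours; real OTT forecasting uses far richer seasonal and audience models.

```python
# Simple exponential smoothing: the level tracks recent observations,
# weighted by alpha, and serves as the one-step-ahead forecast.
def exp_smooth_forecast(series, alpha=0.5):
    level = series[0]
    for obs in series[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

# Hypothetical monthly watch hours (millions) for a genre.
watch_hours = [100, 120, 110, 130, 125]
forecast = exp_smooth_forecast(watch_hours)
```

A platform would run a forecast like this per genre, language, and region, then compare the projected demand curves to decide what to commission and when to release it.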

As the entertainment industry moves towards wider use of AI, it is the early adopters that benefit the most. Intense competition also warrants constant innovation in AI tech to boost efficiency and keep up the trend of upward growth.

If you are interested in the metrics of OTT and how AI is driving this craze, write to us at contactus@infiniteanalytics.com and do sign up for our weekly newsletter.

Meta Releases New Self-Supervised Algorithm data2vec

Meta AI, known earlier as Facebook AI, has launched what it calls the "first high-performance self-supervised [machine learning] algorithm," named data2vec. data2vec is aimed at achieving self-supervised learning beyond specific use cases. Hitherto, self-supervised models could solve only a specific problem: a self-supervised language model could not solve a visual problem, and a self-supervised visual model could not solve an audio problem. What makes data2vec different is that it uses the same algorithm to solve distinct problems, a step forward towards generalized artificial intelligence. A single model can now see, read, and listen, and comprehend rules across all these inputs. According to Meta AI, 'through self-supervised learning, machines are able to learn about the world just by observing it and then figuring out the structure of images, speech or text.' This approach is more effective for machines, which can now complete tasks of greater complexity, like understanding text in more and more spoken languages. With data2vec, Meta AI claims to be getting 'closer to building machines that learn about different aspects of the world around them without having to rely on labeled data.' We are nearing a future where AI could use videos, audio recordings, and articles to learn about even complicated subjects such as a game of chess or soccer, making AI more adaptable. Meta AI also claims that data2vec 'outperformed the previous best single-purpose algorithms for computer vision and speech and it is competitive on NLP tasks.' The main idea behind data2vec is to enable machines to perform unfamiliar tasks as well, bringing computers a step closer to a world in which they rely on less and less labeled data to complete their tasks.

The new algorithm works with a teacher network and a student network. The teacher network computes representations of the full input, whether text, audio, or images; the same input is then masked and given to the student network, which is tasked with predicting the teacher's representations of the full input while seeing only part of it. Because the prediction targets are internal representations of the input data rather than words or pixels, the approach does not depend on a single modality.
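The teacher-student loop described above can be caricatured in a few lines. This is emphatically not the real data2vec, which uses Transformer networks and predicts averaged internal layer representations; it is a toy with one weight per feature, meant only to show the structure: the teacher encodes the full input, the student regresses that target from a masked input, and the teacher is updated as an exponential moving average (EMA) of the student.

```python
import random

def represent(weights, x):
    """Toy 'network': one multiplicative weight per input feature."""
    return [w * xi for w, xi in zip(weights, x)]

def train_step(student, teacher, x, lr=0.1, ema=0.99, mask_prob=0.5):
    # Teacher encodes the FULL input; this is the regression target.
    target = represent(teacher, x)
    # Student only sees a masked copy of the input.
    masked = [0.0 if random.random() < mask_prob else xi for xi in x]
    pred = represent(student, masked)
    # Gradient step on squared error between prediction and target.
    for i, xi in enumerate(masked):
        grad = 2 * (pred[i] - target[i]) * xi
        student[i] -= lr * grad
    # Teacher slowly tracks the student via an EMA of its weights.
    for i in range(len(teacher)):
        teacher[i] = ema * teacher[i] + (1 - ema) * student[i]
    return student, teacher
```

Run over many steps, the student weights chase the teacher's targets while the teacher drifts towards the student, so the two converge; the key point mirrored from data2vec is that the loss is defined on representations, not on raw pixels, audio samples, or tokens.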

One can access the open source code here.

If you are looking forward to machines that rely less on labeled data, or want to talk about data2vec, contact us at contactus@infiniteanalytics.com and subscribe to our newsletter.
