8 Jul 2019 · Written by Sara Jabbari
For DECATHLON, sport is a serious business. So are images. This is reflected in its Digital Asset Management (DAM) platform, which holds more than 2 million assets - media that must be documented both accurately and efficiently.
Media, especially images, play a fundamental role: a product without images simply has no place on our e-commerce site. And while about 80 people put 90% of the media online, nearly 20,000 people visit PixL - the name given to our Digital Asset Management (DAM) platform - each year to find a visual, for a wide variety of purposes: external or internal communication, online and offline materials. About 2,000 new media files are added every day to the DAM, which already contains more than 2 million assets. That count covers only the "master" assets, each of which comes with multiple technical variants. These figures can make you dizzy, but bear in mind that DECATHLON covers about a hundred sports and operates in 52 countries.
By reading the 150 support tickets filed each month, we identified two key issues. First, the time it takes to file images is far too long, which obviously weighs on the daily work of the employees who publish this media. Second, at the other end of the chain, users complain about searches that too often come up empty. Not surprisingly, they expect PixL to perform as well as Google, and that is not yet the case.
We tackled the first issue with a lot of automation. Before, we proceeded in two steps: first, we extracted the product tree from the Product Information Management (PIM) system; then we imported it into PixL, Wedia's enterprise Digital Asset Management (DAM) platform. With this process, it was up to the user to search for metadata within this tree structure to enrich the documentation of the media - tedious and time-consuming. We therefore built a much more fluid synchronization between the PIM and the DAM using APIs. Now, all the user has to do is enter a product code to automatically populate about fifteen metadata fields, which they then simply validate. We quickly saw huge gains in team productivity, in metadata quality and, de facto, in the adoption of PixL.
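As a rough illustration of this kind of PIM-to-DAM synchronization, here is a minimal Python sketch. The endpoints, field names and schema are assumptions for illustration only; neither DECATHLON's PIM API nor PixL's actual API is public.

```python
import requests

# Hypothetical endpoints: the real PIM and PixL (Wedia DAM) APIs are not public.
PIM_API = "https://pim.example.com/api/products"
DAM_API = "https://pixl.example.com/api/assets"

def suggest_metadata(product_code: str) -> dict:
    """Fetch product data from the PIM and map it to DAM metadata suggestions."""
    resp = requests.get(f"{PIM_API}/{product_code}", timeout=10)
    resp.raise_for_status()
    product = resp.json()

    # Map PIM fields to a subset of the ~15 DAM metadata fields the user
    # only has to validate. Field names are illustrative, not DECATHLON's schema.
    return {
        "product_code": product_code,
        "product_name": product.get("name"),
        "sport": product.get("sport"),
        "universe": product.get("universe"),
        "brand": product.get("brand"),
        "season": product.get("season"),
    }

def attach_suggestions(asset_id: str, suggestions: dict) -> None:
    """Push suggested metadata to the asset; a user validates it in the UI."""
    resp = requests.patch(
        f"{DAM_API}/{asset_id}/metadata",
        json={"suggested": suggestions},
        timeout=10,
    )
    resp.raise_for_status()
```

The key design point is that the user's only input is the product code: the metadata is fetched and pre-filled automatically, leaving a simple validation step.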
That helped, but it wasn't enough yet. To make search more efficient, we took the time to analyze the queries. We noticed, for example, that a search on football brought up a lot of product images, while users were primarily expecting images related to the world of football. With these findings in mind, we simplified the metadata to better target the autosuggestions (editor's note: the terms suggested by the engine to complete a query). Here too, we saw a jump in performance. The success of the work on these two fronts - upload and search - can be read in the tickets filed with support: we have gone from 150 per month to just three!
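The following toy sketch shows the general idea of targeting autosuggestions: contextual "universe" terms are weighted above individual product references, so they surface first. The terms and weights are invented for illustration and are not PixL's actual index.

```python
# Curated suggestion terms with illustrative weights: universe-level terms
# (football as a world) outrank individual product names.
SUGGESTIONS = [
    ("football", "universe", 3.0),
    ("football pitch", "context", 2.0),
    ("football boots", "product", 1.0),
    ("football jersey f500", "product", 0.5),
]

def autosuggest(prefix: str, limit: int = 3) -> list[str]:
    prefix = prefix.lower()
    matches = [(term, weight) for term, _, weight in SUGGESTIONS
               if term.startswith(prefix)]
    # Higher-weighted (more contextual) terms come first.
    matches.sort(key=lambda m: m[1], reverse=True)
    return [term for term, _ in matches[:limit]]

print(autosuggest("foot"))  # ['football', 'football pitch', 'football boots']
```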
We went further still: this year, that work led to a first Proof of Concept (POC) built with Wedia in the Azure cloud. The project grew out of a fundamental question. With the synchronization between the PIM and the DAM, and the work to standardize the tree structure of our offer, we have improved how product images are handled, both when pouring them into the DAM and when finding them. But what about contextualized images, those that illustrate a use rather than a specific product model? We asked ourselves: can Artificial Intelligence (AI) recognize a bicycle and its associated use? Can it differentiate between a mountain bike and a city bike? Identify a specific model? With the help of the Wedia team, we worked on the top 10 product universes, such as cycling, diving and camping.
We were positively surprised. While the AI cannot yet identify a specific product - as we suspected - it can summarize an image, and quite well, because it works by association. If it identifies a bike and a mountain, it infers that the image shows a mountain bike. Better yet, it can generate a description like "man next to a tent preparing food". Such enrichment can significantly improve the search for, and reuse of, images.
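Since the POC ran in the Azure cloud, a sketch of this kind of tagging and captioning could use Azure's Computer Vision "Analyze Image" endpoint, as below. The endpoint URL, key and association rule are assumptions; the actual pipeline Wedia built is not described in detail here.

```python
import requests

# Placeholders: substitute a real Azure Computer Vision resource and key.
AZURE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
AZURE_KEY = "<subscription-key>"

def analyze_image(image_url: str) -> dict:
    """Request tags and a natural-language caption for one image."""
    resp = requests.post(
        f"{AZURE_ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Tags,Description"},
        headers={"Ocp-Apim-Subscription-Key": AZURE_KEY},
        json={"url": image_url},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()
    tags = [t["name"] for t in result["tags"]]
    caption = result["description"]["captions"][0]["text"]

    # Association logic as described in the interview: a bike plus a
    # mountain suggests a mountain-bike scene, even if the model is unknown.
    if "bicycle" in tags and "mountain" in tags:
        tags.append("mountain bike")
    return {"tags": tags, "caption": caption}
```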
Although we still need to quantify the ROI of the project at scale, we will put the first set of AI services into production during the summer. The results are convincing enough to put AI on our roadmap, especially since we know how to keep the AI performing over time as the product catalogue evolves. Indeed, Wedia has not only set up a neural network, but also interfaces to steer its learning. In other words, throughout the content lifecycle, administrators will be able to simply validate (or reject) the detections made by the AI and thus teach it to assimilate new objects, new scenes and, ultimately, new products.
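In essence this is a human-in-the-loop feedback cycle. A minimal sketch of the validate/reject step might look like the following; the data structures are illustrative, as Wedia's actual interfaces are not public.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    asset_id: str
    label: str          # e.g. "tent", "mountain bike"
    confidence: float

# Validated detections accumulate as labels for the next training cycle.
training_labels: list[tuple[str, str]] = []

def review(detection: Detection, accepted: bool) -> None:
    """Record an administrator's decision on a single AI detection."""
    if accepted:
        # Confirmed labels feed retraining, so the model keeps up as new
        # products and scenes enter the catalogue.
        training_labels.append((detection.asset_id, detection.label))

review(Detection("IMG_001", "tent", 0.91), accepted=True)
review(Detection("IMG_001", "kayak", 0.42), accepted=False)
print(training_labels)  # [('IMG_001', 'tent')]
```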
There is no shortage of ideas. It should be noted, for example, that DECATHLON no longer uses professional models in its photos; in most cases, it is our employees who appear in them, an approach consistent with our employee-ambassador strategy. But, of course, if these employees leave the company, it must be possible to delete the images that include them. AI, working from portraits of employees, could help manage this kind of legal constraint.
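This remains an idea rather than a built feature, but as an illustration of how it could work, the open-source face_recognition library can match a departing employee's portrait against DAM images and flag candidates for removal. File paths here are placeholders.

```python
import face_recognition

# Encode the reference portrait of the departed employee (placeholder path).
portrait = face_recognition.load_image_file("departed_employee.jpg")
known_encoding = face_recognition.face_encodings(portrait)[0]

def should_flag(asset_path: str) -> bool:
    """Return True if the departed employee appears in the asset."""
    image = face_recognition.load_image_file(asset_path)
    for encoding in face_recognition.face_encodings(image):
        if face_recognition.compare_faces([known_encoding], encoding)[0]:
            return True
    return False

for path in ["campaign_shot_01.jpg", "campaign_shot_02.jpg"]:
    if should_flag(path):
        print(f"{path}: contains the departed employee, flag for removal")
```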
We will work on the performance of the images themselves. How do images impact sales? How many images does it take to convert a sale? Which types of images are most effective? This means correlating data between the analytics of our e-commerce platform and those of the DAM. As with the AI POC, this project should involve a great deal of co-creation between the Wedia and DECATHLON teams.
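To make the correlation concrete, a first analysis could join per-product conversion data from the e-commerce platform with per-product image data from the DAM. The figures and column names below are invented for illustration only.

```python
import pandas as pd

# Invented sample data standing in for exports from both analytics sources.
ecommerce = pd.DataFrame({
    "product_code": ["8357890", "8391245", "8402113"],
    "visits": [12000, 8500, 15300],
    "sales": [480, 210, 950],
})
dam = pd.DataFrame({
    "product_code": ["8357890", "8391245", "8402113"],
    "image_count": [3, 1, 6],
    "has_context_image": [True, False, True],
})

merged = ecommerce.merge(dam, on="product_code")
merged["conversion_rate"] = merged["sales"] / merged["visits"]

# First questions: does image count correlate with conversion, and do
# contextualized images convert better than product-only shots?
print(merged[["image_count", "conversion_rate"]].corr())
print(merged.groupby("has_context_image")["conversion_rate"].mean())
```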