Episode 282
December 9, 2022

Fingers and Scissors: AI Product Photography

AI advancements are changing the landscape of product photography across many industries, including eCommerce. There are plenty of exciting use cases and possibilities, along with some snags the AI world is still untangling, such as rights to content and the accuracy of image generation, issues that could be sorted out by the time 2023 rolls around. Seriously. Listen now to hear Sofiia Shvets’ take on this and more!

<iframe height="52px" width="100%" frameborder="no" scrolling="no" seamless src="https://player.simplecast.com/c96c260d-20ec-48bd-bd47-8a3b74e15397?dark=false"></iframe>

Have Your Robots Call My Robots 

  • The power of AI technology is not only about saving time and money but about streamlining efficiency, so that teams are more connected and better able to operate at a higher scale
  • Having more tooling that allows a still very small team to be able to operate efficiently with competitive margins is the biggest challenge that we face in eCommerce today
  • We are at the beginning of phase two of AI development, generative AI, where we can actually create content from scratch for multiple industries, including eCommerce
  • AI tech is iterating so fast that within the next month or month and a half, Sofiia predicts we won’t be able to pick out the photos on a website that are AI generated
  • “There will be a future use case that we can't even dream of that is more likely the outcome than not.” - Phillip
  • “There is a study that I read recently that currently around 1% of the content on the Internet is AI-generated. In the next 6-10 years, the prediction is up to 50%.” - Sofiia
  • AI changes the way we think about how consumers are going to interact with content in the future

Have any questions or comments about the show? Let us know on Futurecommerce.fm, or reach out to us on Twitter, Facebook, Instagram, or LinkedIn. We love hearing from our listeners!

Brian: [00:00:56] Hello and welcome to Future Commerce, the podcast about the next generation of commerce. I'm Brian.

Phillip: [00:01:02] I'm Phillip and today we are with Sofiia Shvets, the CEO and Founder of Let's Enhance and now Claid AI, who's coming to the show to tell us a little bit about the advancements in generative AI tech for eCommerce. Sofiia, how's it going?

Sofiia: [00:01:16] Thank you guys for having me here. It's been great. November is always a super busy month for all of eCommerce and eCommerce providers as well. So very excited to be here.

Phillip: [00:01:26] Happy to have you. I'm coming to you guys live from my sister's house. If you're watching the video version of the podcast, you can see I'm in a middle schooler's room. So this is that time of year. I'm assuming that your business is quite busy and that you service the eCommerce industry. But tell us a little bit about your journey and what your company does.

Sofiia: [00:01:46] Yeah, so my company is called Let's Enhance.io. We have two products: Let's Enhance, which is where we started, and that's actually... we were not very creative. We just copied the name of the product and named the company the same. And also Claid AI. So we build AI-based tools for eCommerce. We started four years ago with technology that can improve the quality of content, when we were just taking small, crappy-looking photos, trying to clean them up, and redraw them in higher quality. And eventually we started to investigate more and started getting more and more requests. And last year we launched Claid AI, which is an end-to-end API for eCommerce, where we can not just improve the content but also do basically the whole editing workflow. We remove backgrounds, fix light, and can take one picture and create, like, ten assets from it for different menus. So it's designed to replace the very, very boring, routine work that lots of content teams do, and do it at scale. So if you have many pictures of different products, our product is designed to save you time and money.

Brian: [00:03:07] Wow. It's so cool. I feel like this is a time saver and a money saver for sure. It also almost feels like an opportunity for reformation of teams and de-siloing of different parts of a business. Because some of the stuff that I've seen with your tool feels like it would take multiple departments to get to the outcome that you end up with once you apply Claid to an image or to a campaign. And so I feel like, yes, money saver. Yes, time saver. Also, accelerator of business. Launching new channels and launching new campaigns can become so efficient that you could actually rethink how you are structured and what a campaign even means. Do you see the potential here as well, or am I just extrapolating a little too far?

Sofiia: [00:04:11] No, actually, you're spot on. So primarily our clients are multi-seller platforms that get content, get different products, from multiple brands and multiple providers. So what happens is, we call it internally unstructured data: some providers send it in Google Docs, some providers send it in zip archives, some providers send it... It's very different formats. Everything is very inconsistent. So there are whole teams that review them manually and try to structure them for the specific requirements, because lots of platforms know that for their catalog, I don't know, the product needs to take 80% of the canvas, needs to be 1600 pixels wide for this menu, but for this preview it needs to be square, because we take this and automatically scrub these photos and use them for Google Shopping campaigns. So there are a lot of very concrete requirements which lots of departments spend days on... Actually, we did some tests with one of our clients, one of the biggest food delivery companies. They got lots of food photos, and on average a food photo appears online 72 hours to one week later, while going through all the review processes: it goes through multiple teams, the content team, some editing teams, and then it appears online after all the verifications. So part of what we do is not just saving money and time. We also help companies speed up and optimize their processes and handle more content, and with these types of brands, the same teams can now be twice as efficient because this is delegated.
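The kind of per-channel sizing rules Sofiia describes (the product filling 80% of a 1600-pixel canvas for one menu, a square tile for another) reduce to simple arithmetic once a pipeline automates them. A minimal sketch in Python; the function name, canvas sizes, and fill ratio below are illustrative assumptions, not Claid's actual API:

```python
# Hypothetical sketch of per-channel image sizing rules. Each channel
# (catalog listing, square preview, ad feed...) specifies a canvas and
# how much of it the product should fill; the pipeline derives the rest.

def fit_product(product_w, product_h, canvas_w, canvas_h, fill_ratio=0.8):
    """Scale a product so it fills `fill_ratio` of the canvas, centered."""
    scale = min(canvas_w * fill_ratio / product_w,
                canvas_h * fill_ratio / product_h)
    new_w, new_h = round(product_w * scale), round(product_h * scale)
    # Offset that centers the scaled product on the canvas.
    offset = ((canvas_w - new_w) // 2, (canvas_h - new_h) // 2)
    return (new_w, new_h), offset

# One source photo (800x600), two channel-specific derivatives:
catalog = fit_product(800, 600, 1600, 1200)   # wide catalog listing
preview = fit_product(800, 600, 1080, 1080)   # square preview tile
```

The same source photo yields a different derivative per channel, which is exactly the routine work being delegated to automation.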

Phillip: [00:06:10] There's a challenge that a lot of businesses have when trying to scale. You find yourself today if you're launching from zero a brand new brand, there are so many channels to have to publish and there are so many channels that you have to operate in in order to win. The prevailing knowledge today is that direct to consumer is not viable on its own and that you should have a marketplace strategy. You should also have a wholesale and distribution strategy for brick and mortar. The bar has never been higher for brands to be able to be successful in the marketplace. And the challenge is that every single one of these marketplaces, every single one of these channels has their own rules, not just to understand, but to be able to compete and win in those channels. And so [00:07:03] having more tooling that allows a still very small team to be able to operate efficiently with competitive margins, I think is the biggest challenge that we face in eCommerce today. And so having more autonomy in an organization and being able to generate the kind of imagery that you need without having to go through heaps and heaps of compliance or writing briefs and getting time on a team and getting things scheduled and having to go to the creative department and having to get things cleared by brand, these are all impediments to growth in eCommerce right now. [00:07:44] So it's something we talk about all the time, but I feel like just digesting it for our audience to kind of hear what the opportunity is. And this is, by the way, this is like a totally unsponsored episode. {laughter} We're good friends, and I just think that there's a really interesting challenge here, that [00:08:02] there isn't enough tooling in the marketplace today to solve these problems. You have to have 20 tools to do it and very creative people to power it. [00:08:10] Brian, you were going to jump in here.

Brian: [00:08:13] No, I was just laughing because I was like, yes, the 20 tools and several departments and it feels like this sort of cuts through a lot of that very quickly. You start to rethink. My mind's turning on this. You start to rethink what a campaign even means and why would you run things through certain channels if you have the ability to push them out as fast as you would with a tool like this. It almost feels like what Shein did for clothing, Claid could enable from a content generation perspective in some way.

Phillip: [00:08:53] Why don't we start from the beginning and talk about how you're finding product market fit? Even just in the last year since we've met and started talking, Sofiia, it sounds like the shift from applying AI to assist already existing assets to AI becoming generative has been a massive shift in the marketplace's understanding. So maybe take us on that journey as you're finding product market fit with Claid AI.

Sofiia: [00:09:22] Yeah, absolutely. So as I mentioned, we started with a very, very general case: take photos and make them higher quality. And that was first because we were mostly interested in technology, and that's how a lot of AI companies start. Some new technology appears, and then they try to explore the potential applications. So we started very wide, and then, while exploring this, we understood that the big value of technology and applications of AI is actually in narrowing down to specific cases. So that's when we took our existing idea of squeezing the max out of content so that it serves specific requirements and needs, and focused on the industries of marketplaces and eCommerce. So we started adding a lot of additional tools and additional steps of the workflows that teams are already doing. But now they can be replicated with all these automatic operations. You just add them one by one, and 2 seconds later you have what you need. Generative AI is phase two of the whole AI development process. Phase one was: you have a lot of content, like input content, and now you do post-production. You edit it and try to adapt it to your requirements. With generative AI, it's phase two, when you can take the content and now create absolutely new content, because this creation stage was still on photographers, or on production studios, or on... you need to hire somebody to do the content for you. Or even the creation can be your users taking photos with their phones in front of a white wall. It's also the creation stage, but you still need to do it. So when generative AI appeared, I would say it started with text; it always starts more with text networks. And this year was very big. Big models, like DALL-E, started to explore this art creation, and now there is Stable Diffusion, which is a new network that appeared in August.
And in one and a half months, 200,000 developers subscribed to start building on that. So I think we're just at the beginning of this wave, but we're at the beginning of a very big shift: we can not just edit content. We can now create it from scratch for multiple industries and brands, and eCommerce is definitely one of them.

Phillip: [00:12:05] Can you give us a use case with a particular brand because I think the thing that often is missed in these conversations is the practical application? So we're not just talking about painting us a picture in the style of Salvador Dali, right? This is something that's actually practical and useful for merchandising. So maybe walk us through how this might be used.

Sofiia: [00:12:30] Yeah, absolutely. That's exactly what we are exploring right now for the eCommerce and brand case. But one of the examples that has been used is, for example, the Heinz Ketchup brand built a campaign of generating posters using AI and the way AI rethinks them. So it can definitely be used for brand campaigns, for some social campaigns, because of the way that these networks work. They might take these very traditional images that we got used to and rephrase them and redraw them in a completely new way. So it's very, very creative. [00:13:13] It can give a lot of ideation and new ideas on how to look at existing products. So definitely one of the use cases I see is creating brand campaigns and creating ads. On the more practical side, we can create eCommerce images from scratch, when we have, let's say, 10 or 15 photos of the product. And what we are particularly exploring right now is that when you have the same product, it's very hard to create new content all the time. [00:13:49] And as a brand, you need new content all the time: for newsletters, there is Black Friday in two days, there's one campaign, there is Christmas, there is, I don't know, summer season, winter season... So you need to reinvent and create new content all the time, because the content memory of the internet is very, very short. And even for ad campaigns, banner blindness is a huge problem, and everybody tries to overcome it in some way. So one of the use cases that we are particularly exploring is that we can take a product, feed our system with, let's say, 10 or 15 images of that product to give it a lot of the unique details, and then we can create absolutely new images from scratch. We can build stories around the product. I sent a few examples to Phillip that he can add to the episode: you can take a gummies brand, kids' candies, and then you can place them on absolutely different backgrounds.
You can change colors, you can change transformations of the product. So it allows you to be very, very creative and create a lot of new unique content using these networks. Definitely, it's still early. So one of the big challenges is to keep the accuracy of the product, so that AI does not change the form of the product and keeps all the unique details as they should be. But this is something that is possible to do, and this is something that I see as a big opportunity for the constant creation of content.

Brian: [00:17:10] Content is king. And like you said, the Internet has a very short memory. It needs to be refreshed a lot. I think the toolset has to evolve to meet the market where the market is at. Feels like this is that next iteration. I think you touched on some things that I feel like we could dive even further into, and that is: AI has made some real strides in the past year. And in consumer especially, I feel, people are finally starting to catch the vision for what is possible at work. That often happens with new tech. People start to see things in a consumer sense and then they're like, "Oh, wait a minute, I can actually use this in my business. This is really, really cool." And so DALL-E Mini, DALL-E 2. And now you have this tool. You mentioned something about inspiration, and I definitely see a very strong opportunity to use a generative tool to provide creativity and ideas for new campaigns. And you mentioned holiday, and whether it's seasonal or not, there are a lot of other things you can build campaigns around. Do you see your tool as it exists today as more of a campaign creation facilitator, or do you see it actually creating the end product assets that are used as part of that campaign? Is it end to end, or is it just certain components?

Sofiia: [00:19:05] That's a very good question, and it depends on how you wrap it. It can serve both goals. In our case, we are exploring that, in the end, you will have something production ready. But the key thing about all of generative AI is that it's not fully end to end. At this stage, it's a copilot. And that's a big thing even in text generation or any generation: if you create something new, we still need some human input. Like, "How do you want to see the product? What's the story behind that?" Because even if the AI can create the visuals, there still needs to be a story for your brand. What do you want to see? Do you want to build a story where some characters travel around the world, or do you want to create something absolutely new? Your brand is, I don't know... What if we take our sneakers, and how would aliens wear them on Mars? Something like this, an absolutely different context. But still, the human idea is key, because we can create a lot of presets, but it doesn't replace creativity. So in our case, we definitely want something that can be used, and we start with something simple: we take the product, we place it on multiple backgrounds, and that can be used for, let's say, ad campaigns. Like you need ten different banners of your product on different backgrounds. I don't know, if you have a cosmetics brand, then maybe it makes sense to put it on a bathroom shelf or somewhere that makes sense for your customer, or even somewhere crazy, like on the Mars surface. But it can also be used in the ideation stage. And we actually have just been testing with one brand: we trained the system on their product, and now we can create absolutely different forms of that product. So you can change colors, you can, I don't know, make the product reflective, you can make it animalistic.
And that can be a cool idea even to give to your customers: you can wrap this AI into one prompt line, give it to your customers, and ask them, "Create the craziest version of our product and now go..." Lots of ideas. Or that's what you can do in a room with your marketing team or your brand team as well.

Phillip: [00:21:51] I have a couple of practical questions, like how much training data is needed to make a convincing enough model within the AI to place your product into it. A couple of weeks ago we had a conversation and I said, "If I'm Mondelez and I want to use Claid for putting the Oreo cookie into a scene, I want to show an Oreo going into some milk, what is required to get that model in?" Is that a very laborious process? Is that a specialized set of skills? Who can perform that today, and where is it heading?

Sofiia: [00:22:31] So in our case, we have an AI team that currently does all the testing and training, but in the future we definitely will wrap it into some simple interface that users can operate by themselves. The key thing about the data is, it depends on the complexity of the product. If your product has a lot of tiny details, maybe 15 to 20 would be better. If it's something simple like an Oreo cookie, I think we used 7 or 8 photos. You need to give enough context to the AI to understand what your product is about if you want to generate completely new product photos. If you want just to replace backgrounds, we can already do that pretty consistently with just one photo. If you have one picture of your product, we use our existing techniques: we can remove the background, we can mask the product, and then we can inpaint it and position it in different environments. So in that case, one picture might be just enough.
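The one-photo workflow Sofiia outlines (remove background, mask the product, reposition it in new environments) hinges on mask-based compositing. A toy sketch where the "images" are plain 2D lists so the mechanics are visible; real systems use segmentation models and inpainting, and every name here is illustrative:

```python
# Toy mask-based compositing: paste a product cutout onto any number
# of backgrounds. The mask marks which pixels belong to the product
# (1) versus its original backdrop (0).

def composite(background, product, mask, top, left):
    """Paste `product` over `background` wherever `mask` is 1."""
    out = [row[:] for row in background]           # copy the background
    for y, mask_row in enumerate(mask):
        for x, keep in enumerate(mask_row):
            if keep:                               # product pixel, not backdrop
                out[top + y][left + x] = product[y][x]
    return out

bg      = [["."] * 6 for _ in range(4)]            # one "environment"
product = [["P", "P"], ["P", "P"]]
mask    = [[1, 1], [1, 0]]                         # corner cut away by the mask
scene   = composite(bg, product, mask, top=1, left=2)
```

Generating ten banners is then just running the same paste against ten different backgrounds.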

Phillip: [00:23:44] It's funny, because my immediate objection to this was that I have a pretty keen sense of when I can tell that something was generated by AI. Those things have a little bit of an unsettling nature. You shared something on Twitter that went viral, a number of hands all shaking each other. It looked very, very unsettling. But then I remembered that I had the same objection ten years ago to 3D-modeled product photography. And it seems now that with 99% of the things that you buy on the Internet, you're not looking at a real package. You're not looking at a real product. You're not looking at real food. It's not photographed. It's 3D modeled. There's not a piece of furniture that you bought online in the last five years where the picture was taken of a real product. It's all facsimile all the way down. The tool just gets better and better and more convincing, to where it's good enough that we can no longer see it for what it is. It doesn't look like a 3D model. Now it looks like the product that you want to buy. How far away are we from that in the consumer product space with Claid? And how fast do you think you're iterating to get there, or is it pretty convincing already?

Sofiia: [00:25:06] I think we need maybe a month, month and a half. It's moving very fast.

Phillip: [00:25:13] {laughter} Not the answer I thought.

Sofiia: [00:25:15] No, but we can already see the consumerization of AI in multiple cases. There were a lot of popular products that appeared in the last month that can generate AI avatars with your face. You upload 20 photos of your selfies, and now you can be, I dunno, a medieval prince or an alien or whatever you want to be. But for eCommerce, for the first version, definitely. There is already a lot of development. For the one that you mentioned, definitely you can tell that there is some difference, and the whole industry is working on improving that. So you can tell there are some artifacts. There are some funny things that AI just doesn't like, like fingers. I've had cases with six fingers, with seven fingers. Fingers, eyes. It can all be resolved by narrowing the use case. So if you need to create photos where somebody holds your product, with a decent amount of data and training and improving, this will definitely be resolved in a matter of months. I shared this meme because everybody in AI knows that fingers are the toughest cookie to crack. There are fingers and scissors. We have a funny scissor test internally, because I've seen so many forms of scissors. They are very unstable forms, not round, with a lot of components, like fingers. Yeah, AI can add more, but it will be fixed in the next, I would say, few months for sure. It's just a funny, funny case.

Phillip: [00:27:14] How does that get fixed? I pardon the ignorance. I just don't know much about this, but how does that get fixed over time? Is that just more training data or do we get better at recognizing where the limitations are so we avoid them? Is it that we're able to take the areas that show flaws in the generation and we replace them with real photography and sort of merge the real and the sort of generative together? What's the solve there?

Sofiia: [00:27:43] There are a few methods. I'm not a researcher, but definitely more training data. You literally need to show the AI more pictures of fingers. There are also a bunch of methods based on the physics of the product. For example, 3D models are created to keep the proportions, not to create people with gigantic heads or gigantic hands. So there are methods to teach AI to understand proportions and draw them naturally. And overall, the more data you show, the more consistent it is. I know that internally we use an additional AI network that can estimate the quality of the output, so an AI teaches the AI: hey, this is good, or this is not good.
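That last idea, a second network grading the first one's output, can be pictured as a generate-then-filter loop. In this sketch both the generator and the scorer are stand-ins (a seeded random draw and a toy finger-count heuristic), purely to show the shape of the loop:

```python
# "AI judges the AI": generate several candidates, score each with a
# second model, and keep only those above a threshold. Generator and
# scorer here are toys; in practice both would be neural networks.

import random

def generate_candidates(n, seed=0):
    rng = random.Random(seed)
    # Pretend each candidate is an image, summarized by one artifact:
    # how many fingers the generated hand ended up with.
    return [{"id": i, "fingers": rng.choice([4, 5, 5, 5, 6, 7])}
            for i in range(n)]

def quality_score(candidate):
    # Toy scorer: penalize anatomically wrong hands.
    return 1.0 if candidate["fingers"] == 5 else 0.0

def quality_gate(candidates, threshold=0.5):
    return [c for c in candidates if quality_score(c) >= threshold]

kept = quality_gate(generate_candidates(8))
```

Only the candidates that pass the gate ever reach a human, which is how six-fingered hands get filtered out before anyone sees them.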

Phillip: [00:28:42] Oh, that doesn't sound like a slippery slope at all. That sounds... {laughter}

Sofiia: [00:28:47] Yeah, Yeah.

Phillip: [00:28:48] I need an AI just to tell me what to spend money on, and then you can cut me out of the mix here. You guys, you just tell me what I need. Send me stuff when I need it. You don't even need to make photography anymore.

Brian: [00:28:58] This is exactly what's going to happen next: people are going to train their personal AIs to review AI-generated content that will purchase or not purchase AI-generated, print-on-demand products that will be printed by robots and mailed back through automated systems.

Phillip: [00:29:15] You're making a joke, but I think you're actually making a really interesting point here, which is: if you need a diminishing number of photos to have something fairly convincing that can be generated on demand, it really takes testing, user testing, and split testing to a whole separate level, where it's much more algorithmic and much more determined by the person's behavior, who they are, and how you learn about them over time. And [00:29:45] maybe there is a future where we have a profile, like a Brian profile or a Phillip profile, or maybe this is where wallets and the democratization of personalized data come in, allowing me to give you information about myself in a much more fair and control-centric way, where I have control over my private data. This is a thing where I can see myself being able to put myself into product photography, see myself try things on in a given apparel ecosystem, or see myself on a vacation somewhere. These are ways to be much more immersive in the future in an on-demand capacity, as opposed to the way we're thinking about it now, which is our current use case. But that's not how technology ever works, right? There will be a future use case that we can't even dream of that is more likely the outcome than not. [00:30:47] Sorry for the musing there. Yeah.

Brian: [00:30:50] No, no. People love outcomes. They also love having good choices. And there's a lot of data that people have to sort through to choose between all the good options. And so I feel like... And back to your point, Phillip,  [00:31:15]the amount of content that we're going to be able to generate as a result of having systems like this is going to require really smart multivariate testing systems to make sure the right content is displayed at the right time. It changes the way we think about how consumers are going to interact with content in the future. [00:31:39]

Phillip: [00:31:39] And you know what we've always said is that we don't have enough content. We need more content. {laughter} That's what we want. That's what we need.

Sofiia: [00:31:48]  [00:31:48]There is a study that I read recently that currently around 1% of the content on the Internet is AI-generated. In the next 6-10 years, the prediction is up to 50%. [00:32:01] And we are talking about all the content that we consume, like news. Even if you take magazines: text can be generated, covers can be generated, you can put yourself in there, as Phillip described, "I want to see myself in this," and this hyper-personalization is something that everybody wants, and that's for everybody. The whole industry is moving in that direction. We are already on the path. If you think about ads, which people hate, still, if it's a good ad, sometimes you can find something that is interesting for you. So definitely one of the closest cases is: we generate ads, and another AI sits inside this ad system and measures how people react to them, how you personally react. Like, "Okay, I generated this photo on the beach, but Brian didn't respond, so let's try another one. Let's try another one." It's like, "Oh, I see that Brian likes this minimalistic style, I don't know, dark style. Let's show him more content like this." So with enough tweaks it will be almost real time. So I'm sure it will happen in the next few years. {laughter}
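The try-a-creative, watch-the-reaction, try-another loop Sofiia describes is essentially a multi-armed bandit. A minimal epsilon-greedy sketch, with made-up creative names and a seeded click simulation standing in for a real audience:

```python
# Epsilon-greedy selection among generated ad creatives: mostly show
# the creative with the best observed click rate, occasionally explore
# a different one. The "audience" is a simulated click model.

import random

def pick_creative(stats, epsilon, rng):
    """Mostly exploit the best-known creative, sometimes explore."""
    if rng.random() < epsilon:
        return rng.choice(list(stats))
    return max(stats,
               key=lambda c: stats[c]["clicks"] / max(stats[c]["shows"], 1))

rng = random.Random(42)
true_ctr = {"beach": 0.02, "minimal-dark": 0.12, "mars": 0.05}  # hidden tastes
stats = {c: {"shows": 0, "clicks": 0} for c in true_ctr}

for _ in range(5000):
    c = pick_creative(stats, epsilon=0.1, rng=rng)
    stats[c]["shows"] += 1
    stats[c]["clicks"] += rng.random() < true_ctr[c]   # simulated click

best = max(stats, key=lambda c: stats[c]["shows"])
```

With enough impressions the loop concentrates traffic on whichever creative actually earns clicks, which is the "almost real time" personalization she predicts.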

Phillip: [00:33:24] It's so weird. I don't know why all of the ads on the whole of my internet are just fingers and scissors. It's inexplicable. And none of them look quite right. {laughter} That's the future. That's like a nightmare. There's a lot of conversation right now about copyright concerns. I know that that's come up quite a bit. Getty is a very large royalty and licensed image repository that is instrumental in a lot of content production, especially high-end content production, in the world. They've created a policy saying that nothing generative can be added to their library at this point. What are some of the thoughts around that? And could that be a barrier to adoption for the largest organizations, the ones who could potentially benefit the most from AI?

Sofiia: [00:34:49] I think it touches all industries, even though there are a lot of claims that AI will kill designers or photographers. As I mentioned, I believe that it can replace a lot of boring, dumb work and help us be more creative and create more content. Basically, right now you can create a lot of content by yourself without being an artist. And definitely, there are a lot of questions about what content should be used. There was a famous case with the artist Greg Rutkowski. I'm not sure if I pronounce the surname right, but he does these very futuristic-style illustrations of a post-apocalyptic world, and has for the last 25 years. This is his bread and butter. And it turned out that he was very popular, and now everybody can create images in his style, because Midjourney and DALL-E were basically trained on half of the internet, and some part of the internet was his works. So he was very, very unhappy that he wasn't asked for any permission, and now people can copy-paste his style. So definitely there is a lot of hard conversation around that. There is an ongoing lawsuit about AI rights, about whether you can just use content without permission. I think in the future there will be some regulation for sure, so that artists like Greg can opt out: "I don't want my data to be used in this training set." Getty blocked AI-generated content, but their competitor, Shutterstock, announced, "We don't allow generated content, but we will partner with OpenAI, so you can still create content on our platform, and we will use this money to donate to a creator fund." So they will still reward artists, because blocking is like fighting the inevitable. Getty blocked it, and at the same time there are, like, ten big stock photo sites that are purely AI-generated.
Lexica is one of the examples. It's like fighting something that will happen anyway. It's better to control it and adapt it to your model.

Phillip: [00:37:34] Oh, that's so interesting. So again, not having had enough exposure to this world: rather than just saying, "Oh, it's going to disrupt Getty as a marketplace," it's actually creating marketplaces that never existed before, wherein people may specifically want to go to a source just for generative stock photography. That, to me, is the surest sign of a technology that is breaking free of its gravitational pull. We're at escape velocity at this point, because new desires and new demands are emerging in the marketplace that you couldn't have anticipated before.

Brian: [00:38:17] And that's going to break how we do this too... Back to the original point, how we go create campaigns... It's going to make us rethink what a campaign even is and why we would do one because the ability to create them and create fresh ideas is literally at your fingertips all the time. It makes me think that there's going to be a number of brands that pop up that are going to be just focused on using tools like this and they'll be the sacrificial lambs that get bought out or the big company comes along and just copies them. They end up not making a ton of money on it, but there's going to be some really foundational brands that are going to come out, use some of these technologies to really pioneer what it looks like to have a brand that leverages these kinds of technologies and grow very, very, very quickly as a result. But not saying that they're going to be successful because I think there needs to be a couple of sacrificial lambs first before we see it truly adopted across the major brands.

Phillip: [00:39:27] Actually, Sofiia, I want your perspective on this. What if there's a higher-order level, higher-order thinking, where it goes beyond just the content that feeds a website and becomes shoppable in the current online shopping paradigm? What if it's actually the website itself? The actual interface that we use to purchase could itself be generative to some degree. What would it take to get there? I'm thinking to myself, what are ways that we could design this? We have a perspective that came out in the Visions Report around creating new modes of content delivery that are purposely a little offbeat, maybe sort of anti-designed, specifically meant to evoke some emotional response. It's not optimized for shopping. It's optimized to actually make you feel something. Where could we be in the assistive nature of an AI helping us design better user experiences, rather than just the content that goes inside of them?

Sofiia: [00:40:35] Yeah, that's actually a great question, and AI can actually disrupt both ways. Currently, we use a lot of builder tools that help us create websites with some general UX rules. And I remember your position that all websites look the same now. So what AI can do is definitely hit that emotional target, so you can experience something. You can create experiences with the product that you want to showcase: add a person, change it, let the user imagine how... I don't know, say you upload a photo of your room, and then you can see, okay, here's this product in my room and how it will look. Something that AR tried to do, but now the barrier to entry is much, much lower, because we have this input and we can now experience the product. And in the future, definitely, you can create stories on the go. There was one very interesting case from when text-generation networks appeared: some users on Reddit replicated a game, an endless game where you write what the character does, something like, "the character goes there," and then it generated the story as a text log...

Phillip: [00:42:17] Oh yeah. The old MUDs of the bulletin board days, the multiuser domains. I am very familiar because I'm an old guy. Yes.

Sofiia: [00:42:27] Now you can show them, and you can control firsthand what happens. So there are lots of opportunities for storytelling, for immersion, for this hyper-personalization that is embedded in the website: you experience it, and then there's a small buy button at the end.

Brian: [00:42:52] Experiential purchasing. There you go.

Sofiia: [00:42:54] Experiential.

Brian: [00:42:56] Yeah. I feel like there's a lot of opportunity. Phillip, I think you were kind of going toward this: if you have the opportunity to regenerate front ends and experiences, we can get even more personal with them. Right now, personalization just means surfacing products to people based on their past browsing history. What if personalization actually meant creating experiences that were really interesting? I love anagrams and thinking through how different words could become other words. And I feel like you could almost use this to create anagrams out of websites. There are so many different ways you can configure what's there, and you can create different experiences out of the assets that exist. And if you have enough assets, you can create some really cool things. That's really, really cool. I'm so excited about the future of AI as it relates to shopping and content. It feels like the possibilities are almost boundless. Yeah, that's sort of what I'm taking away right now. {laughter}

Phillip: [00:44:19] What's next on the horizon for you, Sofiia? What are ways that people can actually put this to work today and can they get in touch with you to do it?

Sofiia: [00:44:28] I think we will share some contact details. Also, we are currently in a closed launch, so we run tests with some beta partners to showcase what can be done, and at the same time it helps us learn what exactly the goals are, what we need to solve with this technology. As we discussed, it can be applied in many different cases. So if people want to test it, they can fill out the form at ClaidAI/generationAI. It's free. We do free tests for now. So if you're interested, fill out the form and we will be in touch.

Phillip: [00:45:14] Very good. We appreciate it. Sofiia Shvets, the Founder and CEO of the next generation of commerce, literally. Thank you so much for coming on the show. And thank you all for listening to Future Commerce. The best way to predict the future is to build it. Maybe we'll be doing that with AI pretty soon, and from end to end, it sounds like, at least if Brian Lange has his way. Thank you all for listening.
