• A fully remote team of 8 with a strong culture. We built an amazing app and an effective learning model, and developed a work culture that you can see described at the top here and throughout my description of Chicisimo;
  • Why did we build the company? We wanted to build a mechanism to automatically capture what-to-wear behaviour (as Spotify captures your listening behaviour), and automate outfit advice on top of users’ taste profiles. I personally love this type of problem. Whoever owns people’s clothing behaviour will own their attention;
  • The challenge we most liked. The challenge was very specific in scope, and we loved it: we had to build the interfaces that allow a user to very simply provide an input (a specific outfit need, general taste, the clothes in her closet, etc.), WHILE at the same time we were building the infrastructure that understands that input and can respond with the right output. This had to be done under a very disciplined learning scenario: continuously shipping to real people through our apps, and measuring against retention/conversion metrics;
  • The end result was a digital closet app that allows women to easily digitize most of their clothes in minutes. The closet then tells you how to combine your clothes by (i) suggesting outfits, and (ii) showing you outfits of real women wearing the same clothes you have in your closet. This was obviously done in many, many iterations;
Chicisimo app.
Working on retention levers, and results of the iterations
Introducing a new functionality, and adoption.
Chicisimo’s tech in your bedroom – an Alexa skill fully shipped.
  • A mobile app women love, with 5M installs from word-of-mouth and non-paid acquisition. We built and shipped 204 different public releases to the App Store over 5.5 years. Our rating has always been 5 stars or very close to it, depending on the country;

Building the interfaces to capture taste

I’m working on this as you read it.

Our background was in building systems to automatically capture user behaviour. In 2004, we built interfaces (plugins, apps) to capture music listening behaviour (capture playcount events), and analyzed those playcounts, their metadata, and the correlations among the events established by the sequence in which songs were consumed.

We also applied the approach and tech to personal finance in 2007, by analyzing credit card events, their metadata, how all events correlated among themselves and how they related to the traditional profile of a user (with both a b2b and b2c approach – the one that worked for us was the b2b). We had founded Strands in 2004 and had successfully built mechanisms to capture behaviour in a number of segments, as you can read here.

For me, fashion was interesting because automating the capturing of taste is way more complex. There are no interfaces (plugins, systems…) to capture taste data.

So now you have the context of the problem we wanted to solve (described above).

The 2 characteristics of Chicisimo interfaces: as I said above, a big challenge we faced at Chicisimo was designing the interfaces for people to provide input. These interfaces had to be simple and interesting to use, of course, but they also had 2 requirements: (i) they had to be able to capture whatever need women had when it came to deciding their outfit, and (ii) they had to be realistic in scope, in the sense that whatever the interfaces promised, the backend had to be able to respond with the correct output via the app;

We obviously did not fully meet the first requirement at the beginning, but we did build a first iteration that was very compelling: it engaged users and took us to the next stage. The backend for this first iteration was kind of faked, but it worked like a charm. The inspiration for this first interface came from 2 sources. First, we made a big effort to learn what the most common outfit needs people have are, at a high level (do they need ideas on how to wear specific garments? how to dress for places or events?), and how they specifically described those needs in their head. We acquired this learning by placing a fake “outfit search box” on a high-traffic fashion blog, and then by analyzing the query data. With the data, we simplified the problem in our head as much as possible. We knew we wanted a tap-based interface (as opposed to a text-based interface), and thought of the slider approach Urbanspoon popularized years ago. With several iterations and lots of user research, we shipped the slider below. People loved it!
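The query-mining step can be sketched roughly as follows. This is a hypothetical snippet: the queries, need categories, and `classify_need` heuristics are invented for illustration, not Chicisimo’s actual pipeline.

```python
from collections import Counter

# Invented sample of queries captured by the fake "outfit search box".
queries = [
    "how to wear a denim jacket",
    "outfit for a wedding guest",
    "how to wear white sneakers",
    "what to wear to a job interview",
    "how to wear a denim jacket in winter",
]

def classify_need(query: str) -> str:
    """Bucket a free-text query into a high-level outfit need."""
    if query.startswith("how to wear"):
        return "style a specific garment"
    if "to a" in query or "for a" in query:
        return "dress for a place or event"
    return "other"

need_counts = Counter(classify_need(q) for q in queries)
# need_counts.most_common()
# → [('style a specific garment', 3), ('dress for a place or event', 2)]
```

Counting queries per bucket is what surfaces the small set of high-level needs an interface like the slider must cover.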

  • We obsessed over correctly onboarding users; I’ve been doing that all my life. In her first 30 seconds, a user already knew the purpose of the app, had already used the slider a few times, and had discovered at least one relevant idea. With this, the user was retained (we later made the metrics and the onboarding much more sophisticated). The interface above evolved to provide better control (see these 5 seconds), and we eventually killed it because we invented the closet, which was a million times more effective at retaining users. The closet could not have been born without the learning brought by this first interface;
  • One of the elements that most helped with the evolution of the interfaces was our ontology: both improving it and simplifying it. Simplifying the ontology and our understanding of it provided the team with a clarity of mind that really helped us iterate while understanding what we were doing (we discussed our ontology here);
  • The more successful our interfaces were, the more data our graph received, the faster our model learnt, and the more reliable it became. It is interesting that you can’t build the interface without the backend, and the quality of one depends on the quality of the other… so planning the iterations and learnings (and fakings!) is pretty important;
  • The learnings provided by search data. The slider you saw above was great, but it always felt limited. In order to be effective, it had to be easy to use and therefore offer few options: a small number of colors and garments. It worked extremely well for a subset of the clothes in a user’s closet, but it did not cover many others. The search box gave users the ability to describe their need in a more specific way. There are very few types of what-to-wear needs; one of them is to get ideas for a specific garment. For this need, what users did was describe their garment. These descriptors (and the descriptors they chose to omit) were the key learning that allowed us to build an effective closet: a closet in which you could include your clothes, and match them against outfits with those same clothes;
  • The closet: WIP
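The descriptor-matching idea behind the closet can be illustrated with a minimal sketch. The garments, outfits, and scoring rule below are invented assumptions, not the production matching logic:

```python
# A closet item and each outfit are described by sets of descriptors
# (garment type, cut, colour, ...), as learned from search queries.
closet_item = {"skirt", "midi", "black"}

outfits = {
    "outfit_1": {"skirt", "midi", "black", "white", "blouse"},
    "outfit_2": {"jeans", "skinny", "blue", "sneakers"},
    "outfit_3": {"skirt", "mini", "black", "boots"},
}

def match_score(item: set, outfit: set) -> float:
    """Fraction of the item's descriptors that also appear in the outfit."""
    return len(item & outfit) / len(item)

# Rank outfits by how well they match the closet item.
ranked = sorted(outfits, key=lambda o: match_score(closet_item, outfits[o]),
                reverse=True)
# ranked → ['outfit_1', 'outfit_3', 'outfit_2']
```

The omitted descriptors matter too: if users never mention a garment’s fabric when searching, that attribute can be dropped from matching entirely, keeping the closet simple.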
Chicisimo has been featured as App of the Day in 140 countries.

An unsupervised learning model

  • Chicisimo apps are built on top of an unsupervised learning model that automatically classifies clothes and understands people’s taste. We developed this model by automatically learning from users’ closet & outfit data, their queries and interactions. We had to develop different skills, as the closet was a combination of an ontology, a taste graph, an obsession with interfaces and incentives, the correct data infrastructure to interpret user input, and a strong product culture. All Chicisimo tech is described here.
One of Chicisimo processes. Extracting correlations from outfit elements (and more) to create a learning model.
Fashion lacks a standard to classify clothes or to refer to the variety of concepts that describe products, styles, and personal fashion preferences. Our ontology solves this problem.
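A fashion ontology of this kind can be sketched as two tables: one mapping raw vocabulary to canonical concepts, and one arranging concepts in a hierarchy. Every term and concept below is invented for illustration; the real ontology is far larger:

```python
# Map raw user/product vocabulary to canonical concepts.
SYNONYMS = {
    "denim pants": "jeans",
    "blue jeans": "jeans",
    "tee": "t-shirt",
    "tshirt": "t-shirt",
}

# Parent links: each concept points to its broader category.
HIERARCHY = {
    "jeans": "trousers",
    "t-shirt": "tops",
    "trousers": "clothing",
    "tops": "clothing",
}

def canonical(term: str) -> str:
    """Normalize a raw term to its canonical concept."""
    term = term.lower().strip()
    return SYNONYMS.get(term, term)

def ancestors(concept: str) -> list:
    """Walk up the hierarchy from a concept to the root."""
    chain = []
    while concept in HIERARCHY:
        concept = HIERARCHY[concept]
        chain.append(concept)
    return chain

# canonical("Tee") → "t-shirt"; ancestors("t-shirt") → ["tops", "clothing"]
```

Normalizing everything to canonical concepts first is what lets queries, closet items, and outfits from different sources be compared at all.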
The Fashion Taste Graph of a retailer is a brain: like the brain of the “Chief Stylist” who knows precisely each product, each shopper, and the retailer’s editorial line.
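One simple way to seed such a graph is to count how often garments are worn together across outfits, as in the “extracting correlations from outfit elements” process pictured above. This is a minimal sketch under that assumption, with invented outfit data:

```python
from collections import defaultdict
from itertools import combinations

# Invented outfit data: each outfit is a set of garment concepts.
outfits = [
    {"white t-shirt", "jeans", "sneakers"},
    {"white t-shirt", "jeans", "heels"},
    {"black dress", "heels"},
]

# Weighted co-occurrence graph: edge weight = number of outfits
# in which the two garments appear together.
graph = defaultdict(int)
for outfit in outfits:
    for a, b in combinations(sorted(outfit), 2):
        graph[(a, b)] += 1

def neighbours(garment: str) -> list:
    """Garments most often combined with the given one, best first."""
    scores = defaultdict(int)
    for (a, b), w in graph.items():
        if garment == a:
            scores[b] += w
        elif garment == b:
            scores[a] += w
    return sorted(scores, key=scores.get, reverse=True)

# neighbours("jeans") ranks "white t-shirt" first (worn together twice).
```

From this base, edges can then be personalized per shopper and weighted toward the retailer’s editorial line.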

If you are interested in what we’ve built at Chicisimo, here you have a few links that will provide a nice overview: