Loyalty as currency: #blockchain meets @taylorswift13 meets @BurgerKing #loyalty

Loyalty is really about customer control: crafting, controlling and defining the conversation. The most control a customer actually has over loyalty is the power to leave and go to another brand.

Loyalty plus currency is about controlling the customer conversation: not just how you can interact with the brand, but how you can buy from the brand. That’s where the power is.

Taylor Swift and Ticketmaster

In the last few days Taylor Swift has come under fire from some quarters for rewarding loyal fans with access to concert tickets. Plain and simple: put the tickets into the hands of fans, not the ticket touts or the bots.

The first stage of loyalty is measuring feedback from the customer. Rewards are based on interaction with the brand, so there has to be a way of measuring the conversation. Whether that’s a loyalty card at the point of sale, social media or coupons doesn’t really matter; what matters is that it’s traceable and measurable.

I think the criticism from the media and other bands is too harsh, though I can see the angle in all of this. If customers get annoyed with the brand (Taylor Swift) over something outside the brand’s control, in this case touts and bots, it harms the brand and causes long-term damage to the relationship between customer and brand.

The partnership with Ticketmaster changes that: the vendor does the monitoring, and it’s a case of tying the customer’s loyalty signal (social media) to a score that determines whether that customer is loyal or not. If the scoring is good then you can book a ticket; if not, you move down the priority queue. Basically, if you can show loyalty to the brand, the brand will be loyal back.

It’s not perfect but it’s an improvement.

There is nothing new here. It’s just that it’s Taylor Swift, so it comes under the media microscope. Personally, I think it’s a perfect move under the circumstances. Any artist wants to perform to true fans of their work, not to those merely entitled by the size of their wallets.

The Tout Problem

In days gone by counterfeit concert tickets were easily got. I never knew they existed until I went to my first proper gig in 1986, Level 42 at the Manchester Apollo (Mike Lindup lost his voice that night but it left its mark: I’m still a musician thirty years later).

So, these touts were hanging around the front doors selling dodgy tickets and no one batted an eyelid. Some chanced it…. I had my ticket and I wasn’t letting go of it.

In the internet age touts operate by buying up loads of tickets and selling them at a crazy markup. Bots make the whole process worse by purchasing far faster than any human can. The fans lose out and the artist isn’t happy.

The issue is that the currency is the same whether you’re a tout or the most devoted fan of an artist. It’s sterling, or the US dollar, or whatever you’re paying in. Brands don’t control the global currency markets.

Solving the Artist & Customer Purchase Issue

Firstly, you need to control the creation, verification and authenticity of the tickets for a concert.

Dare I say it, I think blockchain might be the answer. The ledger would act as a historic, signed list of the tickets generated. On concert day, when your ticket gets scanned, the confirmation system looks up the confirmed keys on the ledger; if you’ve got a confirmed key then all’s good. If there’s no match in the chain, it certainly wasn’t generated in this ticket run for this gig, so back to the car park with you.

If you keep the blockchain ledger locally (i.e. within the venue on concert day) then you reduce the round trip from scan to confirmation. I’ve worked with mobile ticket scanning confirmation systems that work over the air to internet servers; they’re slow and connections break frequently. Local servers reduce complexity and increase speed.
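As a sketch of the gate-side check, assuming tickets are identified by hashed keys held in the run’s ledger (the hashing scheme, event id and ticket ids here are illustrative, not a real blockchain implementation):

```python
import hashlib

def ticket_hash(ticket_id: str, event_id: str) -> str:
    """Derive the ledger key for a ticket (illustrative, not a real chain)."""
    return hashlib.sha256(f"{event_id}:{ticket_id}".encode()).hexdigest()

# The venue's local copy of the ledger: every key generated in this ticket run.
local_ledger = {ticket_hash(t, "gig-2017-belfast") for t in ["T0001", "T0002", "T0003"]}

def scan_ticket(ticket_id: str, event_id: str) -> bool:
    """Gate-side check: was this ticket part of the original run?"""
    return ticket_hash(ticket_id, event_id) in local_ledger
```

A real chain would carry signatures and block links; the point is that the confirmation is a local set-membership test, so scans stay fast even with no internet round trip.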

So where do Taylor and a Whopper collide?

If you can control the loyalty and who gets what, whether it’s a supermarket like Walmart or Tesco or a global artist like Taylor Swift, that’s one thing. To control what the customer buys with is something else.

Interestingly, Burger King in Russia is tying up customer loyalty from the opposite side: the currency. They’re trialling WhopperCoin, a Bitcoin/Ethereum-like token. These tokens can be bought, sold and traded like bitcoins and exchanged between the brand (Burger King) and any of BK’s partners; even better, customers can trade the currency between themselves. It has value.

Loyalty card points have value too, but it’s usually fractional and in some places comes under banking and finance rules. If you’re running your own loyalty scheme with points it might be worth checking…. you’re effectively introducing liquidity into a market.

So, WhopperCoin for currency with limited use to loyal customers. And a ledger based control system for ticket/transaction authenticity.

Taylor Swift merges with Burger King, kind of….

Not literally, but the concepts could. What if we were to say that Taylor Swift fans can earn fractions of TaylorCoins for social media support, blog posts, full YouTube views of videos and so on. These coins can be sent, received and traded between fans and also used to buy blockchain-enabled concert tickets.

At this point, as I see it, if a tout or a bot wants to purchase Taylor’s tickets for a show they have to pay in TaylorCoins, and anyone converting huge amounts of dollars into TaylorCoins would set off alarms in the system. When the brand has control of anomaly detection at this scale it can act, by declining the transaction or by other means.

The tout at this point will stick out like a big red flag in a very strong wind. When you control the currency, you control the brand. Touts can be turned away early in the process. The only way left to get tickets would be to fake them, and since every genuine ticket is in the blockchain, fakes are easily spotted against the rest of the ledger.
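To illustrate the anomaly-detection idea, here’s a deliberately crude sketch: flag any fiat-to-coin conversion far above the typical purchase size. The median-multiple threshold is my stand-in for whatever detection the brand would really run.

```python
def flag_conversions(conversions, threshold_multiple=10.0):
    """Flag fiat-to-coin conversions far above the typical purchase size.

    `conversions` is a list of (account, amount) pairs; the threshold is a
    simple multiple of the median -- a stand-in for real anomaly detection.
    """
    amounts = sorted(amount for _, amount in conversions)
    median = amounts[len(amounts) // 2]
    return [acct for acct, amount in conversions if amount > median * threshold_multiple]
```

A fan topping up $20–$30 at a time passes quietly; an account converting thousands of dollars in one go is exactly the big red flag in a very strong wind.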

Ticketing is Big Money

Tours usually break even, unless you’re the Rolling Stones. And that’s why the artist/tout relationship has never been good and never will be: one side is making huge amounts of money off the back of the artist’s reputation and brand.

I’m only skimming the surface of a bigger idea here, but on paper an artist controlling the ticket ledger with blockchain, and also controlling the currency the customer uses to interact with the artist, provides two key steps towards reducing fraud and counterfeit goods and stops real fans being locked out from their idols. Win, win, win and win all around.

Give it five to fifteen years….. everyone will be going to concerts this way. Perhaps.




The Gig Economy Part 2 – No, I’m not doing a drinks delivery startup….

My blog post yesterday outlined my thought process for an alcohol delivery service. It’s obviously hit a delighted nerve in the readership (thank you, both of you).

Yesterday I received quite a few messages from folk suggesting I should do such a thing. “That’s a great idea Jase, you should do it!”, “We had this very problem over the weekend!”, “Your delivery charge is too low, I’d pay £3 to get it to my door!” and other communications ended with exclamation marks.

I’m not doing it. There are plenty out there doing such a thing, I named one yesterday, Hungry House, if you’re in the UK. If you want to do it, you have my blessing, use the original post as a starting point.

Winner on branding though…. Saucey, just make it as chic as you can. Bawse as a Service.

The Gig Economy, doing initial calculations as a #startup – #gigeconomy #uber #deliveroo #justeat

I rarely have dreams, but I do get words in my head when I wake up. I don’t know why I woke up with the thought “alcohol delivered to your door”. Everything is great at work, my life is good and I’m not on the lookout for a bottle of anything. It was an idea that just entered my head, and when that happens I note it down and have a look at it.

So, let’s tear it to bits, Buckfast on Demand.

The Idea

Connecting off sales to customers. That’s all it’s about. I could call it the Uber of Booze or the Just Eat of Binge Drinking* or even the Hungry House of Liver Damage, but right now it’s just about getting something from a retailer to a consumer. It’s just delivery.

So, “as a customer I want to order a couple of bottles of Jacob’s Creek CabSav and have it delivered within the hour”.

* Interestingly, Just Eat dropped their alcohol delivery service. And if I was an over-excited entrepreneur then I’d be asking one simple question: “WHY?!”

It’s All About the Numbers

Once you’ve figured out some key numbers then you can look at whether to proceed. I need a calculation to figure out how many drivers I’ll need to service an area of n population.

Where I live has a population of about 12,000 people. Off the top of my head I can think of four off sales that could supply. Notice I’m keeping away from the how-do-we-keep-the-stock-items-up-to-date-on-the-app argument; that only becomes a discussion once I’m past the real question: is this worth doing at all?

I’m estimating that 2% of the local population will use the service twice a week. The peak will be two days of the week, Friday and Saturday, over a six-hour period, 5pm–11pm. Let’s assume that 80% of the order volume falls on those two days. I’m trying to find the number of delivery drivers I’ll need to service the local area.

12,000 × 2% = 240 customers
240 customers × 2 orders = 480 orders a week
480 × 80% = 384 peak orders
384 / 12 peak hours = 32 orders per hour at peak time

With an average trip time of 20 minutes, each driver can manage three trips an hour, so to fulfil 32 orders an hour within the hour I need around 10–11 drivers in theory. That doesn’t account for multiple orders originating from the same retailer: pick-up time is reduced, so in theory I could save 20% of the time. Order patterns are also never uniform; they may bunch up right after work, between 5:30pm and 6:30pm, or just before closing at 10:45pm.

I’m going to settle on 8 delivery drivers.
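The arithmetic above can be wrapped up so the assumptions are easy to change (the function names and parameters are mine, not a standard model):

```python
def peak_orders_per_hour(population, weekly_adoption, orders_each,
                         peak_share, peak_hours):
    """Orders per hour during the weekend peak, from the post's assumptions."""
    weekly_orders = population * weekly_adoption * orders_each
    return weekly_orders * peak_share / peak_hours

def drivers_needed(orders_per_hour, trip_minutes):
    """Raw driver count before batching; round up or down to taste."""
    trips_per_driver_hour = 60 / trip_minutes
    return orders_per_hour / trips_per_driver_hour

# 12,000 people, 2% ordering twice a week, 80% across two 6-hour peaks
oph = peak_orders_per_hour(12_000, 0.02, 2, 0.80, 12)  # 32 orders per hour
raw = drivers_needed(oph, 20)                          # roughly 10.7 drivers
```

Batching same-retailer pickups and the uneven order pattern are what justify settling below the raw figure.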

To Employ or Self Employ that is the question?

This is where things get sticky, especially if you’re a Guardian reader. You want to pay the workers fairly but that comes at a cost. Right now my mini operation covers one town. Settling on a minimum wage rate to start off with, let’s have a look at the numbers.

First assumption: the workers are over 25, so the minimum wage is £7.50 an hour. It’s going to be a part-time gig as my peak times only cover the two days; these bits would need work and refining. Second assumption: employment with this company carries the expectation that this is a second job.

The calculation I’m using is minimum wage multiplied by the number of contract hours, multiplied by the number of drivers (8), then doubled for employer contribution costs and so on.

£7.50 × 16 hours = £120 per driver; £120 × 8 drivers × 2 = £1,920 a week

Now that’s revenue my service needs to earn just to pay the delivery workers; it doesn’t take into account my costs: marketing, hosting, up-front development, payment gateway percentages and so on.

Can We Get the Revenue to Balance Up?

We’ve already calculated that 480 orders a week is a working average based on 2% of the population. Assuming an order total of £16.90 plus £1 local delivery, that gives a per-order total of £17.90 and a total retailer revenue of £8,592 a week. We are, though, nowhere near out of the woods yet.

I’ll be taking a percentage off the retailer; that’s my fee for handling the whole pickup/delivery operation. Ballpark figure, 7%, though once again this could change, especially if it wipes out the profit margin on the retailer’s side.

£8,592 x 7% = £601.44

So on paper the 8 delivery driver plan is out of the window. Now I could be crafty and only use 18–20 year old drivers, reducing the minimum wage from £7.50/hour to £5.60/hour.

£5.60 × 16 hours = £89.60 per driver; £89.60 × 8 drivers × 2 = £1,433.60 a week

A decent reduction but still nowhere near break even. And the thought of “if we raise enough runway” hasn’t entered my head yet; I could have all the runway I wanted and still burn through it at a rate of knots.
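Putting the employed-model numbers side by side (the ×2 on-costs multiplier is the rough rule of thumb from above; function names are mine):

```python
def weekly_wage_bill(hourly_rate, contract_hours, drivers, on_costs=2.0):
    """Weekly cost of employed drivers; the on-costs multiplier doubles
    wages to cover employer contributions and so on."""
    return hourly_rate * contract_hours * drivers * on_costs

def weekly_commission(orders, avg_order_value, rate):
    """What the platform earns from its cut of the retailer order book."""
    return orders * avg_order_value * rate

wages = weekly_wage_bill(7.50, 16, 8)          # 1920.0
income = weekly_commission(480, 17.90, 0.07)   # about 601.44
shortfall = wages - income                     # about 1318.56 short, per week
```

Even the younger-driver wage bill of £1,433.60 leaves the same conclusion: commission income at this scale doesn’t come close to covering employed drivers.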

Now for the Flipside

Now then, that’s if I was employing them. If they were self-employed then I could avoid all that and pay per trip. So we need to look at the average sale again.

I said £17.90 for my two bottles of CabSav plus delivery in the local area. Once I take the mythical 7% I’ve got £1.25 of revenue per sale. I agree to pay the driver 65% of that revenue, or 81p. Now it doesn’t look worth it at all, but the more deliveries you do the more you make, so it becomes a sport to fulfil as many orders as you can. Between them the drivers could share over £300 in two days just on the peak orders.

For me as the entrepreneur, the self-employed driver model wins hands down. They’re making some money and I’m making some money (currently around 44p an order). As a model, though, it’s still wide of the mark.

The model is too basic

Right now everything is based on averages. These are baseline assumptions; now I need to look at the variations.

Seasonal – Christmas, New Year, Wakes, Births, First Communions, Confirmations and right down to “it’s wine o’clock” – you’d expect peaks of order volume and you’d have to adjust manpower to suit.

Event Driven – Any sporting event…. get the beers in. Done.

Order Type – So far I’ve worked the model off one drink, wine, and that’s a low-volume retail order. What if a customer orders six bottles of vodka instead? At £12 a bottle that’s a £72 + £1 order (£5.11 of revenue, with £3.32 to the driver and £1.79 to the startup). You may want to start an Ambulance as a Service at the same time if customers are ordering six bottles of vodka regularly.
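The per-order split works the same for wine or vodka, so it’s worth a small function (the defaults are the ballpark figures above, and the pennies shift slightly depending on where you round):

```python
def order_economics(order_value, delivery_fee=1.00,
                    commission=0.07, driver_share=0.65):
    """Split one order between retailer, driver and platform.

    Defaults are the post's ballpark figures: £1 delivery, 7% commission,
    65% of the commission going to the driver.
    """
    total = order_value + delivery_fee
    platform_cut = total * commission
    driver_pay = platform_cut * driver_share
    return {
        "total": round(total, 2),
        "platform_cut": round(platform_cut, 2),
        "driver_pay": round(driver_pay, 2),
        "startup_keeps": round(platform_cut - driver_pay, 2),
    }

wine = order_economics(16.90)   # two bottles of CabSav
vodka = order_economics(72.00)  # six bottles of vodka
```

One function, and any “what if they order X instead?” scenario is a one-line check.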

Ultimately I’m curious about this baseline 2% figure which I outlined at the start. What if it was 3% of the population of 12,000 ordering?

12,000 × 3% = 360 customers
360 customers × 2 orders = 720 orders a week

Your order volume increases 50% to 720 orders a week, the theoretical retailer order book rises from £8,592 to £12,888, and your 7% becomes £902.16 for the week.
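The same sum at different adoption rates shows how sensitive the model is to that 2% guess (the function and its defaults just bundle up the assumptions above):

```python
def weekly_platform_income(population, adoption, orders_each=2,
                           avg_order=17.90, commission=0.07):
    """Platform's weekly cut at a given adoption rate; the other
    assumptions from the post are kept as defaults."""
    orders = population * adoption * orders_each
    return round(orders * avg_order * commission, 2)

# How sensitive is the model to the adoption guess?
for rate in (0.02, 0.03, 0.05):
    print(f"{rate:.0%}: £{weekly_platform_income(12_000, rate)}")
```

Even at more than double the baseline adoption, the weekly cut stays well below the employed-driver wage bill, which is what pushes the model towards pay-per-trip.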

Scaling Up

To make it you need to operate in as many cities as possible, hundreds of them, with as large a population as possible. Rolling out in small areas is usually not representative of how the model will behave. Where I live is certainly not the target market, I think; even the next major city wouldn’t have the volume.

So, even though I had no notion of doing such a business, I’d pass on it. Too variable for my risk appetite. It’s been an interesting exercise on the numbers though. And I’m not saying this is right; it’s very open to interpretation and would need another four or five iterations and a good kicking about before I’d think it worth pursuing.

Further Reading

If you’re interested in the modelling aspects I’ve lightly scraped at above then it’s worth looking at John Adam’s book, “X and the City”, which models various city scenarios, some sensible and some not. Here’s an Amazon link to it.

If you’re really interested in city modelling then check out Witan, Mastodon C’s city modelling platform. (Not a paid endorsement, by the way.)

Tell Me, What Did I Miss?

So, I’m not a genius, I’m not a maths whizz. This is a mix of simple numbers, common sense and a calculator. So if you think there’s anything that I’m way off the mark with then I’m all ears, feel free to leave me a comment. I’m here to learn from you just as much as you’re reading this to learn from me.

If we can learn from each other then perhaps we can improve things all round for the better.

….So how will I get my beer?

I’ll drive to the shop, it’s just down the road.





The Summer Reading List – feat: @tarah @holdenkarau @HarvardBiz @mattwridley

With two weeks in August I’ve learned some new things. The exchange rate will remain against us but it doesn’t change our resolve, we’ll jump on aeroplanes and go to Spain, we just don’t return home with the donkey and the sombrero now. And yes, Ryanair purposely do hard landings to save on tyre wear and shave turnaround times.

The UK could learn a thing or two on how to charge for public transport. Buses and trams are cheap and people use them. Alicante town and Altea are lovely.

Benidorm is what you make it, it’s not all the mad drinking that the UK media play out. During the high season mobility scooters are a lot less common, in October it’s mobility gridlock.

Finally, Belfast International’s international arrivals could do with a lick of paint, and immigration could be kept on the same level (i.e. the ground floor). Just saying, it’s depressingly grey to come back to. Heck knows what out-of-country visitors make of it.

Aside from all that it’s a good time for me to catch up on reading as I don’t get a huge amount of time. So here’s what was in the bookshelf, in the carry on bag and in my shoulder bag. Never be without a book…..

The Evolution of Everything: How Small Changes Transform Our World (Matt Ridley)

A surprise find in a small newspaper/bookshop in Benidorm. The book is broken up into the different areas of science, philosophy, business, technology, economics and so on. And it’s a great read, with plenty of new things I wasn’t aware of. It’s not a technology book but there are some very interesting points to take from it.

Around about 23 people came up with the idea of the lightbulb during the same period as Edison did. So how does a company or person claim patents on “inventing” something when the idea is usually shared?

Find it on Amazon UK

HBR 10 Must Reads 2017 (Various)

I only ever find HBR books in airports. In reality I bought this one for a single article, about the ownership and curation of Artificial Intelligence models, but the other articles are great too.

Find it on Amazon UK

A Truck Full of Money (Tracy Kidder)

The story of Paul English, one of the founders of Kayak. It’s a read about English, not about Kayak, though Kayak features in and out of the book. It’s a good grounding in his thought process, which can be all over the shop (so not just me then). Sometimes the writing tends to go on a bit; I think it could have been shorter.

Find it on Amazon UK

Women In Tech (Tarah Wheeler)

I bought this for my not-so-wee-one but it’s taken up permanent residence on the living room table for everyone to read. While Tarah has written and curated a brilliant book on women in tech, the information is really a must-read for anyone wanting to be in tech. Like I said in a previous post, I wish I’d had this book thirty years ago.

Find it on Amazon UK

High Performance Spark (Holden Karau and Rachel Warren)

I’m blessed, I get to do some interesting Spark work at Mastodon C but finding good reading material on the subject can be hard. The general rule of thumb is if Holden has been involved then I read it.

The book is about getting the most out of Spark, from Spark SQL and ML through to getting the best performance out of RDDs. The code is in Scala as you’d expect, but that shouldn’t be a worry if you use Python, Clojure or Java. You’ll figure it out; that’s what you’re paid to do.

Find it on Amazon UK

“Women in Tech” by @tarah, possibly the best text on #working in #tech.

Gushing about books is not something I do that often, well possibly apart from my own, but I’ve been gushing about Women In Tech by Tarah Wheeler Van Vlack. Everything I should have known about the industry, even 30 years ago, is in this book.

“Readers will learn: the best format for tech resumes; how to ace a tech interview; the secrets of salary negotiation; the perks of both contracting (W-9) and salaried full-time work; the secrets of mentorship; and how to start your own company.”

I bought it for someone else in the house; they very nearly didn’t get it. I think I need to buy another copy.



Sometimes your best #software #engineering decisions are the quick ones.

Imagine the scene

The walk to gate 81 at Stansted Airport is a long one, over the walkway. Those that know me know I’m not as great on my feet as I once was. So, while I was concentrating on putting one foot in front of the other, one of my shoes totally gave out.

(Yes that’s a picture of a squirrel and that’s a book on yield management calculations, go figure). 

So what to do?

Well, there are shops selling gaffer tape, sellotape, string and superglue, but I’m not for turning around and walking back as I’m halfway to the gate. And the noise of this clod-hopping thing is so loud that I’m getting odd looks…..

Time for some quick solution thinking: what have I got to hand and what can I possibly do with it? Literally in a split second came the question, “How do I keep this sole attached to the rest of the shoe?”. I could only think of one solution: take the lace out and re-tie it under the shoe to hold things together, kind of.

It worked, it was a bit loose but it kept together so I could walk and not be the noisy centre of attention.

There are days when startups and software are the same: backed into a corner with little to hand, you just have to make a decision to get things working. It happens, and sharpening those skills is a good thing.

The shoes are no more but the memory in a blog post lives on.

Thinking about segment latency in streaming apps. #Kafka #Kinesis #Onyx

It’s fair to say that the route of simply building an app and getting it out there is kind of gone. The old adage that “ideas are plentiful but it’s all about the execution” is valid. The execution aspect can be taken to another level though: assume you have a competitor and it’s all going to be decided on one metric, time.

Time of execution is the difference between making the sale and the customer going somewhere else.

The more streaming applications I work with the more fanatical I’ve become about reducing latency. With one basic question:

How can I make this end-to-end application faster? 

Say, for example, we receive messages over HTTP (a web endpoint) via REST or what have you, and they’re sent to some form of persistence at the other side. There’s a start point, some actions and an end goal. I can write this down in a diagram easily enough….

  • Client sends message to endpoint address.
  • Producer sends message to topic.
  • Consumer processes the message.
  • It’s persisted (that could be to a S3 bucket or a database for example).

From left to right everything has a time implication. I’ve put some basic millisecond times in my diagram, merely guesses.

  • Client sends message to endpoint address. (200ms)
  • Producer sends message to topic. (10ms)
  • Consumer processes the message. (1500ms)
  • It’s persisted (that could be to a S3 bucket or a database for example). (2500ms)

So from start to end we’re estimating 4210ms, or just over four seconds, to complete the entire process. This is probably a worst-case number; things may work in our favour and the times come out much faster. Log it, make a record of it, and review the min/max/average times.

Perhaps to put it in other terms: assume you lose $5 of revenue for every lost transaction and you’re losing 20 a day due to the speed of response. That’s $36,500 a year…. that just will not do.
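The latency budget and the revenue cost are simple enough to jot down in a few lines (the stage names are mine, and the dollar figure assumes a full 365-day year):

```python
# The post's guessed stage times, in milliseconds
STAGES = {
    "client_to_endpoint": 200,
    "producer_to_topic": 10,
    "consumer_processing": 1500,
    "persistence": 2500,
}

def total_latency_ms(stages):
    """End-to-end worst-case estimate: serial stages simply add up."""
    return sum(stages.values())

def yearly_loss(lost_per_txn, lost_per_day, days=365):
    """Revenue cost of transactions lost to slow responses."""
    return lost_per_txn * lost_per_day * days
```

Keeping the budget as data makes it trivial to see which single stage dominates, and which stage a day of tuning would actually pay back.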

What are the things I can change?

So what are the things that can be changed? Well, in the instance above, not a lot. If this was a Kafka setup I’d be using the Confluent REST Proxy for my endpoint, so nothing can be done there.

Topic management is done by Kafka itself, but there are a few things we can do here in the tuning, such as throwing as much RAM at the box as possible and reducing any form of disk write I/O (disk writes will slow things down).

The consumer is the one thing we have a real amount of control over, as there’s a good chance it’s been coded up by someone, whether in-house or outsourced. Know thy code; test the routine in isolation. The nice thing about functional languages like Clojure is that I can test functions outside of frameworks if they’re coded right. It’s data in and data out.
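A consumer transform written as a pure function can be tested with no framework around it; this sketch (the added field names are illustrative) shows the data-in, data-out shape:

```python
import time
import uuid

def enrich(message: dict) -> dict:
    """A pure consumer transform -- data in, data out -- testable with no
    framework around it. Returns a new dict; the input is left untouched."""
    out = dict(message)
    out["processing_id"] = str(uuid.uuid4())
    out["processed_at"] = time.time()
    return out
```

Because it takes a plain map and returns a plain map, you can time it, profile it and unit-test it on your laptop long before it sits behind a topic.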

Persistence is an interesting one. It’s rarely in your control. If it’s a database system then, like the Kafka settings, you might have some leverage on the settings but that’s about it. When you get into cloud based storage like Amazon S3 then you are at the mercy of connection, bandwidth and the service at the end. You have very little control of what happens. The end point is usually where the most amount of latency will occur.

With AWS Kinesis you don’t have that kind of luxury; it’s pretty much set for you and that’s that. You can increase the shard count and add more consumers within the per-shard read limits (five read transactions per second, up to 2MB per second per shard), but you’re scaling up and it costs more in the long run. If you want total millisecond control, then it’s Kafka I’d be going for.

That Was Too Simple…

Consumers are expected to do things ranging from the simple, like passing a message through into storage as in the previous example, to the more complex. Perhaps a cleaning transformation in the consumer plus a call to a third-party API to get an id from some of the message data. More milliseconds to think about, and some interesting thoughts on the process.

Here’s my new process model.

We can assume the HTTP REST to topic part remains the same; the consumer is doing the work here.

  • Receive the message from the topic.
  • Consumer starts processing.
  • Do transformation 1; this might add a unique UUID and a time to mark our processing block.
  • In parallel, call a third-party API to get ids.
  • Combine the results from the transform and getids into the outgoing map.
  • Persist the results as previously done.

When functions split out (as they can do in the Onyx framework for example) the maximum latency within the graph workflow is going to be the node that takes the longest time to complete. In our case here it’s the getids function.

Let’s put some timings down and see how things look.

I’ve amended the consumer; that’s where the changes have happened. There’s 750ms from the inbound message being picked up, 200ms for the transform and 2500ms for the third-party API call. As the transform runs in parallel with the API call, only the slower of the two counts, so our 4210ms process now becomes 200 + 10 + 750 + 2500 + 2500 = 5960ms.
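The critical-path rule above can be written down directly: serial stages add, parallel branches cost only as much as the slowest one (function name and stage groupings are mine).

```python
def pipeline_latency_ms(serial_stages, parallel_branches):
    """Serial stages add; parallel branches cost only as much as the
    slowest one -- the critical path through the workflow graph."""
    return sum(serial_stages) + max(parallel_branches)

# endpoint (200) + topic (10) + pickup (750) + persistence (2500),
# plus the slower of transform (200) vs third-party getids (2500)
latency = pipeline_latency_ms([200, 10, 750, 2500], [200, 2500])  # 5960ms
```

It also shows why tuning the 200ms transform buys nothing here: the branch maximum is pinned at the 2500ms third-party call.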

We can tune one thing, the transform function, but there’s not a lot we can do about the third-party API call. Like the persistence, this one is really out of our control: there’s bandwidth, the connection and someone else’s server to take into account.

The Kafka Log Will Just Catch It All

Yes it will, and you hope the consumers will catch up while there are gaps in the incoming message delivery. What I’d like, all in all, is to know those consumers are processing as fast as they can.

It’s a fine mix of message size, message time-to-live (TTL) and consumer throughput. I’ve seen first-hand the consumers fail on a Friday and every broker fill up and die by the Monday. (No 24/7 support, so not my problem.) 🙂

Enough disk space to handle the worst case scenarios is vital. Speaking of which….

Expect the Worst Case Scenario

When using third-party services, including AWS or any other cloud provider, work on the rule that these services do go down. What’s the backup plan? Do you requeue the message and go around again? What should happen to messages that fail mid-consumer: return them to the queue or dump them to a failed-message topic?

Whenever there’s a third party involved you have to plan for the worst. Even with services I’ve written myself or been involved in, I still ask the question, “what happens if this goes down?”, and while the team might look at me like I’ve got ten heads (remotely) because it’s never gone down…. well, that doesn’t mean it never will. And the day it does, my consumers won’t really know about it; they’ll just fail repeatedly.
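One hedged sketch of the requeue-or-dead-letter decision; the retry count and the in-memory dead-letter list are stand-ins for whatever your framework actually provides (in Kafka, a failed-message topic):

```python
dead_letters = []  # stand-in for a failed-message topic

def handle_message(message, process, max_retries=3):
    """Try the consumer logic a few times; if the downstream service stays
    down, dead-letter the message instead of losing it or retrying forever."""
    for _ in range(max_retries):
        try:
            return process(message)
        except ConnectionError:
            continue  # requeue and go around again
    dead_letters.append(message)
    return None
```

The key property is that a dead third party degrades into a bounded retry plus a parked message, not an infinite loop of repeated failures.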

You Can Control Your Latency

There are things you can control and things you can’t. From the outset think about the things you can control, and think about how to measure each step in the process. Is there a bottleneck? Is it code that could be refactored? Does the Docker container add any latency over the network? What about your Marathon/Mesos deploys? The list is long and not everything needs dealing with; some things will be more critical than others.

One thing I’ve learned is every streaming application is different. You can learn from your previous gains and losses but it’s still a case of starting on paper and seeing what can be shaved off where and what’s not in your control.

Ultimately it’s all fun and it’s all learning.


With no in house dev team your startup is dead. #startups #tech #development #software

The rise of the devless startup has been around for a while, echoes of “we can outsource it and save $n” are always bandied around at pitches, meetings or the casual coffee with other founders.

Here’s my stark warning: with no in-house dev team, your startup will die far quicker than the rest. If you are a founder who can code full systems, front and back end, apps and APIs, then you are in a very strong position.

The Changing Landscape

The last couple of years have seen a sharp increase in new startups. According to the Kauffman Startup Index report, the figure currently stands at 310 new entrepreneurs per 100,000 population. That doesn’t sound like a lot but it equates to roughly 500,000 new treps each month. If only I knew what the churn rate was.

What hasn’t changed much is the workflow from idea creation to funding. It’s still the same trodden path with the same numbers: “VCs play the 1 in 10”, one gold star while the other nine break even at best. Perhaps that number needs addressing a little more realistically, 1 in 100 for example?

And when there’s a huge amount of money being pumped into ideas that aren’t going to turn into much, expect the correction to happen. Stock markets are emotionally driven: see a dip and lots will follow, including investment money.

You need an edge, you need an advantage and you need to ruthlessly (but legally) exploit it for profit.

No in house dev team, no chance….

With the number of startups in existence it’s been a great time to be a software developer with 3+ years’ experience. You could hop from place to place and take a decent salary; notice I didn’t say anything about stock options. Guess what, they’re probably worthless unless you were the first hire or a tech co-founder.

If you don’t have an in-house developer team, or even one lead developer, then make finding one your priority. You need not care where they are in the world or whether they’re remote; just find a good one and pay them. Contractors are fine, but only once you have dev number one in place.

Over the last couple of days I’ve been watching news reports into various fields of commerce and service. We can measure sentiment and extract topics and tags from vast troves of data. Startups with no in-house development team cannot react in the short time frames that are now expected.

Imagine the scenario: a news report starts doing the rounds at 8:30am with a graph showing a huge decline in an area your startup works in. There’s an edge staring you in the face. If you email or phone your outsourced development company, you’ve already lost to a competitor: “we’ll be able to do that on Friday for you”. (And if you don’t think you have a competitor, there’s another reality check to do.)

What you need is the war room: how do we exploit this need? Find a solution, do it, and have the press release done by 11am to everyone you know via every channel you can find. If you can’t do that, there’s a good chance you’re missing 90% of the opportunities passing you by; even worse, the competition will be exploiting them against you.

You basically need to be a mix of Pablo Escobar and Bobby Axelrod, but legal.

Monitor every piece of data in your field.

Monitor everything you can get your hands on, every social channel, news feed, RSS feed, basically anything. Automate it, pick out specific phrases, use sentiment analysis to see what the audience and markets are thinking.
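Even the crudest automated watch is better than none. This sketch flags feed items containing watched phrases, a placeholder for proper topic extraction or sentiment scoring (the phrases are examples, not recommendations):

```python
WATCH_PHRASES = {"price drop", "recall", "outage"}  # tune to your own field

def scan_feed(items, phrases=WATCH_PHRASES):
    """Flag feed items mentioning any watched phrase -- the crudest possible
    stand-in for real topic extraction or sentiment analysis."""
    return [item for item in items if any(p in item.lower() for p in phrases)]
```

Wire it to your social channels and RSS feeds on a schedule and you have the 8:30am alarm bell; what you do by 11am is the part that needs the in-house team.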

While the huge talk-up of Big Data and its opportunities (well, most of them vanished in 2012) still goes on, it’s the narrowly focused data that you really need: the data that gives you the alpha, the one piece that gives you the edge over the competition. It’s the deep dive into what you know to pull out the one pearl that will turn it around, and when you find it you still need the in-house developer team to implement it as quickly as they can. A third party never will; they’ll never share your concerns, fire or need.

It Does Not Take A Ton of #AI or #MachineLearning to make #customers delighted. #delight101

Image Credit: TibetandTaylor via Creative Commons

Reality Check, Your Customers Probably Don’t Care About AI

Can we just be honest for a second: most AI or Machine Learning tools are going to be overkill for most small businesses. All the marketing saying it will increase ROI is starting to sound as much like snake oil as the social media snake oil of 2011.

The only time customers care about AI is when it’s being used against them, for your gain. Back to the creepy line again….

It’s not about VR goggles to view your products (two words: Second Life). It’s not about virtual currency, it’s definitely not about blockchain. It’s not about Hadoop, or Spark, or Kafka or TensorFlow. It’s only about connection and service. Delight your customer and the tools don’t really matter.

Delight 101

It’s easy really, and if you are a retailer or ecommerce company, even better. Know your customer’s birthday. Simple.

Step 1 – Once a week, do the birthday run.

Go to your CRM or customer database every Monday morning, first thing, before start of business and find out which customers have their birthday that week. Make a list, save it to Excel…. whatever.
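The Monday-morning birthday run is a one-function job against a CRM export; the `(name, date_of_birth)` shape here is an assumption about what your export looks like:

```python
from datetime import date, timedelta

def birthdays_this_week(customers, today=None):
    """Monday-morning run: customers whose birthday falls in the next seven
    days. `customers` is a list of (name, date_of_birth) pairs."""
    today = today or date.today()
    week = {(today + timedelta(days=d)).strftime("%m-%d") for d in range(7)}
    return [name for name, dob in customers if dob.strftime("%m-%d") in week]
```

Matching on month and day rather than the full date is what makes a 1980 birthday show up in a 2017 run.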

Step 2 – Send them something

Give them a voucher, and make sure the barcode or QR code (yeah I know, I said it) is unique to the birthday boy or girl. 10% off ain’t gonna cut it; it’s gotta be real good: a free product, free consulting for an hour, or anything else you feel like.

Delivery? Twitter message, Instagram, Snapchat, Facebook, Linkedin, postal service, SMS message? On a cake with those little silver balls….

As long as you can trace back who came back delighted. In fact, you don’t even need the barcode; they’ll be delighted anyway.

Step 3 – Do Nothing else

That’s it. Just do it out of good will, nothing else. I’ve been watching too many Gary Vaynerchuk videos, it’s starting to show, I think that’s a good thing.

The Times You Do Need AI, ML and BigData?

When you have a million customers, forty million transactions and tens of thousands of unique products, that’s when you need those tools (or if you’re making self-driving cars, then you definitely need AI/ML). For the rest of us, simple databases and CRMs will do for the time being.

It’s not difficult; you just have to commit to doing it every week.


My friends at Airpos are fundraising on Crowdcube. #pointofsale @airpos #startups #crowdcube

A scalable, secure and mature software-as-a-service platform, AirPOS enables hundreds of independent retailers in over a dozen countries to manage their business and serve their customers more easily. The company is targeting a potential market of 20m cloud point-of-sale terminals worldwide.

AirPOS – Crowdcube Pitch from AirPOS on Vimeo.

I’ve watched this company evolve; I was also their CTO for a while back in the early days. Give yourself some time and look over their pitch; there are seven days on the clock.