“Women in Tech” by @tarah, possibly the best text on #working in #tech.

Gushing about books is not something I do that often, well possibly apart from my own, but I’ve been gushing about Women In Tech by Tarah Wheeler Van Vlack. Everything I should have known about the industry, even 30 years ago, is in this book.

“Readers will learn: the best format for tech resumes; how to ace a tech interview; the secrets of salary negotiation; the perks of both contracting (W-9) and salaried full-time work; the secrets of mentorship; and how to start your own company.”

I bought it for someone else in the house; they very nearly didn’t get it. I think I need to buy another copy.

 

 

Sometimes your best #software #engineering decisions are the quick ones.

Imagine the scene

The walk to gate 81 at Stansted Airport is a long one, over the walkway. Those who know me know I’m not as good on my feet as I once was. So while I was concentrating on putting one foot in front of the other, one of my shoes totally gave out.

(Yes that’s a picture of a squirrel and that’s a book on yield management calculations, go figure). 

So what to do?

Well, there are shops selling gaffer tape, sellotape, string and superglue, but I’m not for turning around and walking back as I’m halfway to the gate. The noise of this clod-hopping thing is so loud that I’m getting odd looks…

Time for quick-solution thinking. What have I got to hand and what can I possibly do with it? In a split second came the question, “How do I keep this sole up with the rest of the shoe?”. I could only think of one solution: take the lace out and re-tie it under the shoe to hold the thing together, kind of.

It worked. It was a bit loose but it held together, so I could walk and not be the noisy centre of attention.

There are days when startups and software are the same: backed into a corner with little to hand and you just have to make a decision to get things working. It happens, and sharpening those skills is a good thing.

The shoes are no more but the memory in a blog post lives on.

Thinking about segment latency in streaming apps. #Kafka #Kinesis #Onyx

It’s fair to say that the easy route of building an app and just getting it out there is kind of gone. The old adage of “ideas are plentiful but it’s all about the execution” is still valid. The execution aspect can be taken to another level though: assume you have a competitor and that the outcome is going to be decided on one metric, time.

Time of Execution is the difference between making the sale or the customer going somewhere else.

The more streaming applications I work with the more fanatical I’ve become about reducing latency. With one basic question:

How can I make this end-to-end application faster? 

Say, for example, we receive messages via HTTP (a web endpoint) via REST or what have you, and they’re sent to some form of persistence on the other side. There’s a start point, some actions and an end goal. I can write this down in a diagram easily enough…

  • Client sends message to endpoint address.
  • Producer sends message to topic.
  • Consumer processes the message.
  • It’s persisted (that could be to a S3 bucket or a database for example).

From left to right everything has a time implication. I’ve put some basic millisecond times in my diagram, merely guesses.

  • Client sends message to endpoint address. (200ms)
  • Producer sends message to topic. (10ms)
  • Consumer processes the message. (1500ms)
  • It’s persisted (that could be to a S3 bucket or a database for example). (2500ms)

So from start to end we’re estimating 4210ms, or just over four seconds, to complete the entire process. This is probably a worst-case number; things may work in our favour and the times may be much faster. Log it, make a record of it, and review the min/max/average times.
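If you’re logging each stage, the review is a few lines of code. Here’s a rough sketch in Python; the stage names and millisecond figures are just my guesses from the diagram above, not measurements:

```python
# Hypothetical stage timings (ms) from the diagram above -- guesses, not measurements.
stage_times_ms = {
    "client_to_endpoint": 200,
    "producer_to_topic": 10,
    "consumer_processing": 1500,
    "persistence": 2500,
}

def end_to_end_ms(stages):
    """Total latency when every stage runs one after the other."""
    return sum(stages.values())

def summarise(runs):
    """Min/max/average end-to-end time over a batch of logged runs."""
    totals = [end_to_end_ms(r) for r in runs]
    return min(totals), max(totals), sum(totals) / len(totals)

print(end_to_end_ms(stage_times_ms))  # 4210
```

In a real setup each run’s stage times would come from your logs rather than a hard-coded dict.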

Perhaps if I put it in other terms: assume you lose $5 of revenue for every lost transaction and you’re losing 20 a day due to the speed of response. That’s $36,500 a year… some things will just not do.
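The back-of-an-envelope arithmetic, for the record (the $5 and the 20-a-day figures are the assumptions above):

```python
lost_per_transaction = 5   # dollars, assumed
lost_per_day = 20          # transactions lost per day, assumed
annual_loss = lost_per_transaction * lost_per_day * 365
print(annual_loss)  # 36500
```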

What are the things I can change?

So what are the things that can be changed? Well in the instance above not a lot. If this was a Kafka setup I’d be using the Confluent REST API interface for my endpoint so nothing can be done there.

Topic management is done by Kafka itself, but there are a few things we can do here in the tuning, such as throwing as much RAM at the box as possible and reducing any form of disk write I/O (disk writes will slow things down).
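For illustration, these are the sort of producer and broker settings I’d be looking at; the values here are placeholders, not recommendations, so benchmark against your own workload:

```properties
# Producer side: trade a few ms of latency for bigger batches (placeholder values)
linger.ms=5
batch.size=65536
compression.type=lz4
acks=1

# Broker side: more threads for network and disk work
num.network.threads=8
num.io.threads=16
```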

The consumer is the one thing we have a real amount of control over, as there’s a good chance it’s been coded up by someone, whether in house or outsourced. Know thy code; test the routine in isolation. The nice thing about functional languages like Clojure is that I can test functions outside of frameworks if they’re coded right. It’s data in and data out.
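That “data in and data out” idea isn’t Clojure-specific. Here’s a minimal sketch (the function and fields are made up) of a consumer routine written as a pure function, so it can be tested and timed with no framework in sight:

```python
import time

def transform(message: dict) -> dict:
    """Pure function: same input always gives same output, no broker needed."""
    return {**message, "amount_cents": int(round(message["amount"] * 100))}

# Test and time the routine in isolation -- no Kafka, no Onyx.
start = time.perf_counter()
result = transform({"id": "abc", "amount": 12.34})
elapsed_ms = (time.perf_counter() - start) * 1000
print(result)
```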

Persistence is an interesting one. It’s rarely in your control. If it’s a database system then, like the Kafka settings, you might have some leverage on the settings but that’s about it. When you get into cloud based storage like Amazon S3 then you are at the mercy of connection, bandwidth and the service at the end. You have very little control of what happens. The end point is usually where the most amount of latency will occur.

With AWS Kinesis you don’t have that kind of luxury; it’s pretty much set for you and that’s that. You can increase the shard count and add more consumers within the per-shard limits (around five read transactions per second, 2MB/s out and 1MB/s in per shard), but you’re scaling up and costing more in the long run. If you want total millisecond control, then it’s Kafka I’d be going for.
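For a feel of the scaling-up cost, here’s a back-of-envelope shard calculation; the 1MB/s per-shard write limit is my assumption from the AWS documentation at the time, so check the current limits:

```python
import math

SHARD_WRITE_MBPS = 1.0  # approximate per-shard ingest limit; check the AWS docs

def shards_needed(ingest_mbps: float) -> int:
    """Minimum shard count for a given ingest rate -- more shards, more cost."""
    return max(1, math.ceil(ingest_mbps / SHARD_WRITE_MBPS))

print(shards_needed(7.5))  # 8
```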

That Was Too Simple…

Consumers are expected to do anything from the simple, like passing a message through into storage as in the previous example, to the more complex. Perhaps performing a transformation in the consumer plus a call to a third-party API to get an id from some of the message data. More milliseconds to think about and some interesting thoughts on the process.

Here’s my new process model.

We can assume the HTTP REST to topic remains the same, the consumer is doing the work here.

  • Receive the message from the topic.
  • Consumer starts processing.
  • Do transformation 1; this might add a UUID and timestamp to mark our processing block.
  • In parallel, make an API call to get ids from a third party.
  • Combine the results from the transform and getids into the outgoing map.
  • Persist the results as previously done.

When functions split out (as they can do in the Onyx framework for example) the maximum latency within the graph workflow is going to be the node that takes the longest time to complete. In our case here it’s the getids function.

Let’s put some timings down and see how things look.

I’ve amended the consumer; that’s where the changes have happened. There’s 750ms from the inbound message being picked up, 200ms for the transform and 2500ms for the third-party API call. Since the transform and the API call run in parallel, only the slower of the two counts, so our 4210ms process now becomes 200 + 10 + 750 + 2500 + 2500 = 5960ms.
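Written as code, with the numbers above (parallel branches contribute their maximum, not their sum):

```python
# Stage timings in ms, from the worked example above.
http_to_endpoint = 200
producer_to_topic = 10
consumer_pickup = 750
transform_ms = 200      # runs in parallel with...
getids_api_ms = 2500    # ...the third-party API call
persist_ms = 2500

# Parallel branches only contribute the slowest branch.
consumer_ms = consumer_pickup + max(transform_ms, getids_api_ms)
total_ms = http_to_endpoint + producer_to_topic + consumer_ms + persist_ms
print(total_ms)  # 5960
```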

We can tune one thing, the transform function, but there’s not a lot we can do with the third party API call. Once again, like the persistence, this one is really out of our control as there’s bandwidth, connection and someone else’s server to take into account.

The Kafka Log Will Just Catch It All

Yes it will, and you hope the consumers will catch up while there are gaps in the inbound message delivery. What I’d like, all in all, is to know those consumers are processing as fast as they can.

It’s a fine mix of message size, message time to live (TTL) and consumer throughput. I’ve seen first hand consumers fail on a Friday and every broker fill up and die by the Monday. (No 24/7 support, so not my problem.) 🙂

Enough disk space to handle the worst case scenarios is vital. Speaking of which…

Expect the Worst Case Scenario

When using third-party services, including AWS or any other cloud provider, work on the rule that these services do go down. What’s the backup plan? Do you requeue the message and go around again? What should happen to messages that fail mid-consumer: return them to the queue or dump them to a failed-message topic?
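One hedged sketch of that requeue-or-dump decision: retry a bounded number of times, then hand the message to a failed-message (dead letter) topic rather than looping forever. The function names here are illustrative, not any particular client library:

```python
MAX_RETRIES = 3  # illustrative bound

def handle(message: dict, process, requeue, dead_letter):
    """process/requeue/dead_letter are injected callables (illustrative).

    On failure, requeue with an incremented retry count until MAX_RETRIES,
    then route the message to a dead-letter topic instead.
    """
    try:
        return process(message)
    except Exception:
        retries = message.get("retries", 0)
        if retries < MAX_RETRIES:
            requeue({**message, "retries": retries + 1})
        else:
            dead_letter(message)
        return None
```

A simple harness with lists standing in for topics shows the routing: failures get requeued until the retry budget runs out, then land in the dead-letter list.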

Whenever there’s a third party involved you have to plan for the worst. Even for services that I’ve written myself or have been involved in, I still ask the question, “What happens if this goes down?”, and while the team might look at me like I have ten heads (remotely) because it’s never gone down… well, that doesn’t mean it never will. And the day it does, my consumers won’t really know about it, they’ll just fail repeatedly.

You Can Control Your Latency

There are things you can control and things you can’t. From the outset think about the things you can control, and think about how to measure each step in the process. Is there a bottleneck? Is it code that could be refactored? Does the Docker container add any latency over the network? What about your Marathon/Mesos deploys? The list is long and not everything needs dealing with; some things will be more critical than others.

One thing I’ve learned is every streaming application is different. You can learn from your previous gains and losses but it’s still a case of starting on paper and seeing what can be shaved off where and what’s not in your control.

Ultimately it’s all fun and it’s all learning.

 

With no in house dev team your startup is dead. #startups #tech #development #software

The rise of the devless startup has been around for a while, echoes of “we can outsource it and save $n” are always bandied around at pitches, meetings or the casual coffee with other founders.

Here’s my stark warning: with no in house dev team, your startup will die far quicker than the rest. If you are a founder who can code full systems, front and back end, apps and APIs, then you are in a very strong position.

The Changing Landscape

The last couple of years have seen a sharp increase in new startups. According to the Kauffman Startup Index report the figure currently stands at 310 new entrepreneurs per 100,000 population. That doesn’t sound like a lot, but it equates to around 500,000 new treps each month. If only I knew what the churn rate was.

What hasn’t changed much is the workflow from idea creation to funding. It’s still the same trodden path with the same numbers: “VCs play the 1 in 10”, one gold star while the other nine break even at best. Perhaps that number needs addressing a little more realistically, 1 in 100 for example?

And when there’s a huge amount of money being pumped into ideas that aren’t going to turn into much, expect the correction to happen. Stock markets are emotionally driven: see a dip and lots will follow, including investment money.

You need an edge, you need an advantage and you need to ruthlessly (but legally) exploit it for profit.

No in house dev team, no chance….

With the number of startups in existence it’s been a great time to be a software developer with 3+ years of experience. You could hop from place to place and take a decent salary; notice I didn’t say anything about stock options. Guess what, they’re probably worthless unless you were the first hire or the tech co-founder.

If you don’t have an in house developer team, or even one lead developer, then make that your priority to find one. You need not care where they are in the world or if they are remote, just find a good one and pay them. Contractors are fine but only once you have dev number one in place.

Over the last couple of days I’ve been watching news reports into various fields of commerce and service. We can measure sentiment and extract topics and tags from vast troves of data. Startups with no in house development team cannot react in the short time frames that are now expected.

Imagine the scenario: a news report starts doing the rounds at 8:30am with a graph showing a huge decline in an area your startup is working on. There’s an edge staring you in the face. If you have to email or phone your outsourced development company, you’ve already lost to a competitor (and if you don’t think you have a competitor, there’s another reality check to do): “we’ll be able to do that on Friday for you”.

What you need is the war room: how do you exploit this? Find a solution, do it, and have the press release out by 11am to everyone you know via every channel you can find. If you can’t do that then there’s a good chance you’re missing 90% of the opportunities passing you by; even worse, the competition will be exploiting them against you.

You basically need to be a mix of Pablo Escobar and Bobby Axelrod, but legal.

Monitor every piece of data in your field.

Monitor everything you can get your hands on, every social channel, news feed, RSS feed, basically anything. Automate it, pick out specific phrases, use sentiment analysis to see what the audience and markets are thinking.
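As a toy illustration of that kind of automated phrase-watching (the watched phrases and feed items are invented; a real setup would plug in actual feeds and a proper sentiment model):

```python
# Invented example phrases to watch for across feeds.
WATCH_PHRASES = {"decline", "shortage", "recall"}

def flag_items(feed_items):
    """Return the feed items that mention any watched phrase."""
    hits = []
    for item in feed_items:
        words = set(item.lower().split())
        if words & WATCH_PHRASES:
            hits.append(item)
    return hits

print(flag_items(["Huge decline in widget sales", "Everything is fine"]))
```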

While the huge talk-up of Big Data and its opportunities (well, most of them vanished in 2012) still goes on, it’s the narrowly focused data that you really need: the data that gives you the alpha, the one piece that gives you the edge over the competition. It’s a deep dive into what you know, to pull out the one pearl that will turn it around, and when you find it you still need the developer team to implement it as quickly as they can. A third party never can; they will never share your concerns, fire or need.

It Does Not Take A Ton of #AI or #MachineLearning to make #customers delighted. #delight101

Image Credit: TibetandTaylor via Creative Commons

Reality Check, Your Customers Probably Don’t Care About AI

Can we just be honest a second: most AI or Machine Learning tools are going to be overkill for most small businesses. The marketing saying it’s going to increase ROI and all that is starting to sound as much like snake oil as the social media snake oil of 2011.

The only time customers care about AI is when it’s being used against them, for your gain. Back to the creepy line again….

It’s not about VR goggles to view your products (two words: Second Life). It’s not about virtual currency, it’s definitely not about blockchain. It’s not about Hadoop, or Spark, or Kafka or TensorFlow. It’s only about connection and service. Delight your customer and the tools don’t really matter.

Delight 101

It’s easy really, if you are a retailer or ecommerce company, even better. Know your customer’s birthday. Simple.

Step 1 – Once a week, do the birthday run.

Go to your CRM or customer database every Monday morning, first thing, before start of business and find out which customers have their birthday that week. Make a list, save it to Excel…. whatever.
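The birthday run could be as little as a filter over a customer export. A sketch (the field names are made up, and matching ignores the year, as a birthday check should):

```python
from datetime import date, timedelta

def birthdays_this_week(customers, today):
    """Customers whose birthday (month/day) falls in the next 7 days."""
    week = {(today + timedelta(days=i)).strftime("%m-%d") for i in range(7)}
    # birthday stored as "YYYY-MM-DD"; compare month-day only
    return [c for c in customers if c["birthday"][5:] in week]

customers = [  # made-up CRM rows
    {"name": "Ana", "birthday": "1990-06-07"},
    {"name": "Ben", "birthday": "1985-12-25"},
]
print(birthdays_this_week(customers, date(2017, 6, 5)))
```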

Step 2 – Send them something

Give them a voucher, and make sure the barcode or QR code (yeah I know, I said it) is unique to the birthday boy or girl. 10% off ain’t gonna cut it; it’s gotta be real good: a free product, free consulting for an hour, or anything else you feel like.

Delivery? Twitter message, Instagram, Snapchat, Facebook, Linkedin, postal service, SMS message? On a cake with those little silver balls….

As long as you can trace back who came back delighted. In fact, you don’t even need the barcode. They’ll be delighted.

Step 3 – Do Nothing else

That’s it. Just do it out of good will, nothing else. I’ve been watching too many Gary Vaynerchuk videos, it’s starting to show, I think that’s a good thing.

The Times You Do Need AI, ML and BigData?

When you have a million customers, forty million transactions and tens of thousands of unique products, that’s when you need those tools (or if you’re making self driving cars, then you definitely need AI/ML). For the rest of us, we can get away with simple databases and CRM’s for the time being.

It’s not difficult, you just have to commit to doing it every week.

 

My friends at Airpos are funding raising on Crowdcube. #pointofsale @airpos #startups #crowdcube

A scalable, secure and mature software-as-a-service platform, AirPOS enables hundreds of independent retailers in over a dozen countries to manage their business and serve their customers more easily. The company is targeting a potential market of 20m cloud point-of-sale terminals worldwide.

AirPOS – Crowdcube Pitch from AirPOS on Vimeo.

I’ve watched this company evolve, I was also their CTO for a while back in the early days. Give yourself some time and look over their pitch, there’s seven days on the clock.

https://www.crowdcube.com/companies/airpos-ltd/pitches/ZpDjoZ

Time to Remind Myself What a #Startup Is.

From Wikipedia:

A startup company (startup or start-up) is an entrepreneurial venture which is typically a newly emerged, fast-growing business that aims to meet a marketplace need by developing or offering an innovative product, process or service. A startup is usually a company such as a small business, a partnership or an organization designed to rapidly develop a scalable business model.

Sorry, I just had to remind myself. It’s not about getting on accelerators, crowdfunding, flattering venture capitalists, faux mentors or any of that. It’s about having an idea, building it out and selling it to paying customers. If you need help then that’s fine, the help is out there, just remember that those services come at a cost whether that’s in sanity or time, probably both.

Government money is fine, as long as it’s not your single source of income; if it is, then take a good look at your idea, it’s probably on life support already. It’s also shifting sands, it could stop at any moment.

Where possible I’m still in favour of just getting on with it by yourself. As put nicely by a hedge fund founder in Vanity Fair a number of years ago, “VC funding is one step up from human trafficking….”

Better to think like Mike ‘Wags’ Wagner.

“You know I want to be on the outside, rocking with the marauders.”

 

Revisiting #Spark Scripts From the Command Line. #bigdata #spark #scala

It’s been a while since I looked at any Spark code; I’ve just been working on other things. There have been a few comments on the blog about running Spark jobs from the command line shell.

Test Data

First let’s have some text data to work off; we’ll do a basic word count on it. I had nothing to hand apart from the output of my TensorFlow algorithmic book generation.

I Wordlessly Kate and I gaze at the elevator at the end. I have never understood what you’re going to do with my safety. I groan as my body is rigid, tension radi- ating out me in front of me. He looks so remorseful, and in the same color as the crowd arrives and in my apartment. The thought is crippling. But and I don’t want to go to me that I want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to you. I don’t want to be beholden to him — and I can tell him about 17 miles a deal. “Did you have to compro- mise. I giggle. “Wench. Food, now, please.” “Since you want to talk about you in my own way, and I am going to be very surprised, not to see you. Ax (Your fiancee) I ask softly. He looks so vulnerable — and I don’t know if it’s my heightened way of the ‘old,’ son. I have a hairdresser arriving at your mom?” “Yes.” He grins at me and winks, making me flush. He smirks at me. “What is it?” I ask. He gazes at me, his eyes dark and earnest. “Find out the elevators, of the first time in a half-bear — and I have to go to church . . . Date: June 10, 2011 16:05 To: Christian Grey Twiddling Christian and I don’t know if it’s not at the rules are a hostile Anthem, “Every Breath You Take.” I do you have to do with you?” he asks. 
“I don’t want to go to work for a living, and I’ll be very persuasive,” he murmurs, and his eyes are alight with humor. “He’s like a drink,” Jack mut- ters, locking the eggs. I crack through my body. But what I do to make you uncomfortable.” I shake my head to fetch him at the same COURTESY to a child. “I thought you were in the apartment or you^?

It’s not a classic, I know.

The Scala Spark Script

Next a Scala script that does the word count in Spark.

val text = sc.textFile("/Users/jasonbell/sample.txt")
val counts = text.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)
counts.collect

Basic…. but it works.

And A Run Through

And then run it from the command line.

$ /usr/local/spark-2.1.0-bin-hadoop2.3/bin/spark-shell -i wc.scala
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/04/08 09:07:14 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/04/08 09:07:20 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://192.168.1.65:4040
Spark context available as 'sc' (master = local[*], app id = local-1491638836119).
Spark session available as 'spark'.
Loading wc.scala...
text: org.apache.spark.rdd.RDD[String] = /Users/jasonbell/sample.txt MapPartitionsRDD[1] at textFile at <console>:24
counts: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey at <console>:26
res0: Array[(String, Int)] = Array((COURTESY,1), (“Since,1), (flush.,1), (is,3), (now,,1), (2011,1), (arrives,1), (same,2), (June,1), (am,1), (have,5), (never,1), (tension,1), (winks,,1), (dark,1), (miles,1), (with,3), (fiancee),1), (crippling.,1), (first,1), (—,3), (fetch,1), (talk,1), (uncomfortable.”,1), (eyes,2), (crack,1), (my,7), (Take.”,1), (child.,1), (go,3), (make,1), (Breath,1), (what,2), (out,2), (Twiddling,1), (me,,1), (gazes,1), (looks,2), (Date:,1), (deal.,1), (remorseful,,1), (me,4), (him,3), (his,2), (are,2), (body,1), (shake,1), (persuasive,”,1), (“Yes.”,1), (can,1), (half-bear,1), (mise.,1), (Wordlessly,1), (“What,1), (elevator,1), (Food,,1), (.,3), (earnest.,1), (as,2), (going,2), (‘old,’,1), (very,2), (don’t,27), (you,1), (son.,1), (safety.,1), (eggs.,1), (apartment...
Welcome to
 ____ __
 / __/__ ___ _____/ /__
 _\ \/ _ \/ _ `/ __/ '_/
 /___/ .__/\_,_/_/ /_/\_\ version 2.1.0
 /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_60)
Type in expressions to have them evaluated.
Type :help for more information.

If you’re not getting the results then something is wrong.

 

 

 

The Northern Ireland #AI Startup Problem – #AI #NorthernIreland #Startups

(The post here reflects my own thoughts and may not be the thoughts of my employer, just putting that out there now to avoid any confusion)

The Shift

Over the last few months there’s been a shift: a movement from websites and apps that do stuff (mostly useful, some utterly useless) to more refined thinking on process and insight.

During the weekend I was looking at the funding patterns of artificial intelligence startups. Handily, KDnuggets (the place you look for anything on data mining and machine intelligence) had a piece on 50 of the “top” companies right now in AI.

The 50 to Watch

Company Sector Investment ($m)
InsideSales.com (Provo UT) Ad Sales 251.2
Persado (New York NY) Ad Sales 66
APPIER (Taipei Taiwan) Ad Sales 49
DrawBridge (San Mateo CA) Ad Sales 46
Zoox (Menlo Park CA) Autotech 290
Nauto Inc. (Palo Alto CA) Autotech 14.9
nuTonomy (Cambridge MA) Autotech 19.6
Dataminr (New York NY) BI 183.44
Trifacta (San Francisco CA) BI 76.3
Paxata (Redwood City CA) BI 60.99
DataRobot (Boston MA) BI 57.42
Context Relevant (Seattle WA) BI 44.3
Tamr (Cambridge MA) BI 41.2
CrowdFlower Inc. (San Francisco CA) BI 38
RapidMiner (Boston MA) BI 36
Logz.io (Tel Aviv Israel) BI 23.9
BloomReach (Mountain View CA) Commerce 97
Mobvoi Inc. (Beijing China) Conversation AI 71.62
x.ai (New York NY) Conversation AI 34.3
MindMeld (San Francisco CA) Conversation AI 15.4
Sentient Technologies (San Francisco CA) Core AI 135.78
Voyager Labs (Israel) Core AI 100
Ayasdi (Menlo Park CA) Core AI 106.35
Digital Reasoning (Franklin TN) Core AI 73.96
Vicarious (San Francisco CA) Core AI 72
Affectiva (Waltham MA) Core AI 33.72
H2O.ai (Mountain View CA) Core AI 33.6
CognitiveScale (Austin TX) Core AI 25
Numenta (Redwood City CA) Core AI 24
Cylance (Irvine CA) Cyber Sec 177
Darktrace (London UK) Cyber Sec 104.5
Sift science (San Francisco CA) Cyber Sec 53.6
Kensho (Cambridge MA) Fintech 67
Alphasense (San Francisco CA) Fintech 35
iCarbonX (Shenzhen China) Healthcare 199.48
Benevolent.AI (London UK) Healthcare 100
Babylon health (London UK) Healthcare 25
Zebra medical vision (Shefayim HaMerkaz Israel) Healthcare 20
Anki (San Francisco CA) IOT 157.5
Ubtech (Shenzhen China) IOT 120
Rokid (Hangzhou Zhejiang China) IOT 50
Sight Machine (San Francisco CA) IOT 44.15
Verdigris tech. (Moffett Field CA) IOT 16.1
Narrative science (Chicago IL) Text Analysis 29.4
Captricity (Oakland CA) Vision 51.9
Clarifai (New York NY) Vision 40
Orbital Insight Inc. (Mountain View CA) Vision 28.7
Chronocam (Paris France) Vision 18.35
Zymergen (Emeryville CA) Other 174.1
Blue river tech (Sunnyvale CA) Other 30.4

Key Summary

  • Minimum Investment – $14.9m
  • Maximum Investment – $290m
  • Average Investment – $73.26m
  • Number of companies listed – 50

The listed companies were “ones to watch”, that doesn’t take into account the other 10,000 or so that will be in stealth, not on anyone’s radar or just making sales and getting on with it.

For me one concern is the lower investment limit, $14.9m, I’ve not seen any NI company raise that amount of investment. And I’ve spent time thinking about why that could possibly be.

  • All the startups are donkeys. They’re just not worth that amount.
  • All the founders are playing the Northern Ireland funding game: they raised their little $1m and can’t raise again because they’ve already given away 20-25% of the company.
  • There’s no actual IP or product.
  • There’s no customers.
  • There’s no problem being solved.

That’s off the top of my head, if I really went all mind palace on it I’d probably come up with another ten reasons.

The Talent Pool

The much lauded reason for FDI companies setting up shop in Belfast and, occasionally, Derry.

“you have graduates – there’s a lot of talent in Belfast” From the BT, here.

Which I read as, “There’s plenty of cheap graduates looking for a job in Belfast, we can exploit that and reduce our bottom line.”

It’s time to seriously question this marketing message. Yes, there are some very talented graduates in Northern Ireland. Are they ready for the market where they are needed? Debatable. Do they fill the gap of what’s really missing? No, they don’t.

It still skirts around the issue for any startup: a complete lack of good CTO talent. What I’m seeing more and more of is companies setting up, getting that free government money (startup DLA, if you will) and handing out vanity titles like there’s no tomorrow. I’ve written and spoken about this many times before; if you want to read it again then have a look at this.

Good CTOs in NI are hard to find, plain and simple. The reason is simple too: they’re pretty much all in great jobs with large employers, on deals too good to lose. And don’t think that’s a fluke; the large companies engineer it that way, as they obviously don’t want to lose good talent when they see it.

Jumping to a startup with a very questionable runway is a huge risk. Look at yourself in the mirror and ask yourself, “Am I worth the risk to my employees, my C levels and most importantly my customers?”.

If you flinch or can’t do it then you obviously need a session with Wendy Rhodes.

NI Needs a BIG WIN

If you think you’re on the starting wave of AI technology then you’re already five years too late. The same mistake was made with BigData opportunities. What I personally believe is required right now is for someone to bring a product along that is so unique and solves a problem better than anyone else that the rest of the world can’t do anything but look.

This thing also needs to IPO big time and make the founders and early-stage investors so rich that people look at Northern Ireland as the place. The time is now to stop kidding ourselves that we’re at the start of a wave; we’re already behind. Still thinking that social media data is going to make you (and others) rich? I doubt it; that edge is long gone.

There’s little point building tools; it’s hard to create revenue with programming tools and APIs. Solve a problem better than anyone else so it can’t be ignored. The tools to do AI and Machine Learning are plentiful, whether it be TensorFlow, Weka or what have you; search hard enough here and you’ll find posts on those technologies. At the end of the day the programming side isn’t that difficult when you have good coders who understand the logic.

I firmly believe it can be done; I just think the thinking needs to change. Stop listening to salaried government PR (use them, fine, but weigh up what’s being said) and focus on idea, IP and customers.

  • Kick ass product
  • Kick ass team
  • More than $7m in investment
  • An edge that no one can ignore.
  • Main focus to remain in NI and IPO.

Your focus needs to be three standard deviations from the mean; that’s where the risk and the potential rewards are.

And keep this in mind: AI is not about replacing jobs, it’s about job creation, creating new jobs that currently don’t exist. It’s an exciting time to be here in NI, but you have some serious catching up to do.

Beltech 2017

I’m on the panel at Beltech 2017, “Public Debate: The Impact of AI on our World” at 6pm though I’ll be there most of the day on behalf of Mastodon C. So feel free to catch up with me there.

 

[#Kafka Diaries] – Your daily morning streaming meditation guide.

Kafka acting up like a toddler is a symptom, not the cause; you’re doing something to it to make it act that way.

 

Strive to reduce latency. Remember the Rule of 72.

 

Use frameworks but remember there’s added latency so make sure you can tune it.

 

Log everything.

 

Time everything.

 

Know your message size and size the broker memory on “write throughput × 30 seconds” accordingly.
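That rule of thumb as arithmetic; the throughput figure is an example, not a recommendation:

```python
write_throughput_mb_s = 50   # example write rate, MB per second
buffer_seconds = 30          # the "* 30 seconds" rule above
broker_memory_mb = write_throughput_mb_s * buffer_seconds
print(broker_memory_mb)  # 1500
```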

Know your log retention size and time; if you don’t know it then it’s probably the default, 168 hours.

 

Ensure your consumers can die cleanly if they need to, without wrecking the other consumers.

 

That niggle inside saying something ain’t right, heed it.

 

If another company can process thirty million messages a second, you can too.

Bonus: Tea Solves Everything