Singularity University New Zealand Summit 2016: Aftermath

This post begins with an introduction to Singularity University and the recent Summit in New Zealand. If you are familiar with the organisation and/or the event — skip to my thoughts below.

I had the pleasure of attending the inaugural Singularity University Summit in Australasia, which was held in Christchurch, New Zealand, last week. Thought-Wired, a company that I co-founded and have been working on for a few years now, was invited to demonstrate the product we are developing. Our product leverages brainwave-reading technologies to allow people with severe physical disabilities to communicate and interact with their environment.

During breaks between sessions and before each day's programme, the team and I showed how our software can be used to drive a small robotic vehicle and perform basic communication activities, such as saying “yes” or “no”, or answering multi-choice questions. In keeping with the theme of the conference, we also attempted to explain to the Summit delegates the potential for exponential impact.

In a nutshell, Singularity University aims to “empower a global community with the mindset, skillset, and network to create an abundant future”, and exponential technologies are at the core of this. It runs programs for individuals and organisations, where futurists and experts share their ideas and advice to engage, inspire and arm people with the tools they need to effect positive change in the world. Much of this builds on Moore’s Law, and the mindset they teach is to go beyond innovation and instead aim for disruption.

“Innovation: doing the same things better. Disruption: doing new things that make the old things obsolete.” – SU

There are also programs to support ideation, start-up creation and acceleration, mentorship and enterprise scaling.

We were fortunate enough to be able to catch most of the speakers over the two-and-a-half days of the Summit, and the breadth of topics ranged from cyber security, crypto-currency, and astrophysics to education, autonomous cars and artificial intelligence. The phrase “mind-blowing” was pretty much the mantra of the event (although the official call-to-arms may have been something like “Understand, adapt and thrive”).


Thoughts – not food!

Even though I wasn’t technically a delegate, it was great to be able to interact with New Zealand’s influencers, innovators and other like-minded people, and participate in some of the dialogue on Twitter. I’ve had some time to decompress, gather the remaining pieces of my blown mind and thought I would share some of my takeaways. I noticed that many of the more vocal delegates are educators and found it interesting that a lot of the discussions I’ve seen have been from an educational perspective. I must need to expand my social network or something (not that it’s a bad thing, since I do think education is absolutely vital).

This is actually a good segue into my first thought. Educators are so important, especially when you consider their role in transferring knowledge and skills to our youth, and in supporting them. The abundance of educators at the Summit was great, and I have no doubt that much of the youth will indirectly benefit from it. But what of the old youth?

I am oxymoronically referring to people around my age (maybe even younger), who have long since left the education system (including tertiary), and would be unlikely to attend something like the Summit of their own accord. I’m sure many of these people feel stuck in their jobs or dismayed at the lack of tangible effect they are having, or can have, on the world. How do we access them, spread the message and catalyse them into action? I feel very privileged to see and hear about what I do — but how do we extend that to a greater audience? My circle of friends contains an unusually high number of people with PhDs, and even then there aren’t that many who think about disruption or exponential technologies beyond following them on their news feeds.

Furthermore, how do we get this type of thinking to trickle into other social groups, which might not be as highly educated, but still brimming with untapped potential, drive and ability? How do we build momentum?

This leads on to my second thought: what happens now after the Summit? At the end of last week we felt inspired and motivated, ready to change the world overnight. Today though, we’re back at our regular jobs — is the feeling still there? Is it as strong? How do we maintain this momentum we’ve built, and spread it through our networks and community? Is there a risk of us being trapped in an echo chamber where we motivate and congratulate ourselves, but nothing extends beyond our own little bubble? (And we’re beginning to see the effects of these little bubbles).

So many questions and I wish I knew the answers. There is some comfort in the fact that there are others who are thinking the same thing and are attacking it head-on – for example, a hui was organised in Auckland recently, and I’m sure there are similar happenings around the country. What I really want to see from here though, is growth.

My next takeaway kind of feeds on from these ideas of accessibility and inclusion, and is relevant to the actual technological side of things. At the Summit, we were told that 4 billion people are living on less than $5 USD a week. That’s more than half the world’s population. That’s insane.

And for these 4 billion people, are things like self-driving cars going to be useful? Are they going to see much benefit from advances in AI? Are their governments going to be able to afford to buy and maintain surgical robots or advanced biotechnologies? Exponential technologies that only cater to the top 20% or so of the population will not be sustainable, and access to exponential technologies is something that really needs to be considered.

I believe access to technology was brought up by one speaker, but it was a theme that was not discussed much. I would be curious to see, as companies identified as disruptive and exponential grow: how sustainable their exponential growth is; where they reach saturation; and at that saturation point, what proportion of the population they have affected. I’ve illustrated what I mean in the figure below.

Turns out, my drawings look like the work of a 5 year old

Exponential growth is great, but obviously the world has a finite number of people on it, and only when uptake reaches 100% has optimal saturation been obtained. A technology that benefits a small portion of the population may experience exponential growth, revenue and profit – but what about optimal impact?
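The post itself contains no maths for this, but the idea can be sketched with a logistic (S-shaped) curve: uptake that looks exponential early on, then flattens as it approaches the finite population. The function and parameter values below are purely illustrative assumptions, not figures from the Summit.

```python
import math

def logistic_uptake(t, population=100.0, growth_rate=1.0, midpoint=10.0):
    """Adoption as an S-curve: near-exponential growth at first,
    saturating as uptake approaches the total population."""
    return population / (1.0 + math.exp(-growth_rate * (t - midpoint)))

# Early on, uptake roughly doubles with each step (exponential-looking)...
early = [round(logistic_uptake(t), 2) for t in range(0, 5)]
# ...but later it crawls towards the population ceiling (saturation).
late = [round(logistic_uptake(t), 2) for t in range(14, 18)]
print(early)
print(late)
```

A technology serving only a slice of the population simply has a lower ceiling in this picture: the curve still looks exponential on the way up, but saturates well short of everyone.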

This brings me to my final thought. The phrase “exponential technology” was thrown around a lot at the Summit. This, coupled with the talks about autonomous cars, AI, crypto-currency and so on, would make you naturally associate exponential technology with advanced or high-tech technologies.

I want to remind you that as a society, we’re currently living in the most advanced stage of technology ever, and yet there is a significant proportion of the population who don’t have access to and/or are not benefiting from it. There is still a huge amount of potential for low-tech solutions to have exponential impact, so it’s not necessary to always look for the most advanced solution.

In my opinion, you don’t have to be disruptive to make an impact, and even just a little bit of innovation can still be the difference between something incremental, and something exponential. There are only six degrees of separation between every human being on the planet, and this interconnection between us runs deep. With our compassion, resolve, intelligence, ability to communicate, and will to survive, there’s no reason why we, ourselves, should not be exponential.


Machine Learning: A Social App Example

I was recently invited with Thought-Wired to give a talk at Hobsonville Point Secondary School, an institution leading the way in modern approaches to learning and education in Auckland. The facilities and staff are so mind-bogglingly awesome, I always spend most of my time there lamenting my own educational experience. Case in point, I was there to talk about machine learning, and its relevance to artificial intelligence. This was a subject that I did not learn about until the final year of my Bachelor’s and these students were about to learn about it before having sat NCEA Level 1 [so so jealous].

For my talk, I ran through a real-life example and demo of how machine learning works. This post aims to supplement that by going through the example in a bit more depth and covering some points I missed in my talk. Hopefully, it should be understandable to high school students, but if there are sections that aren’t clear, or if I’ve fallen into the abyss of academic vernacular, leave a comment so I can fix it up.


I initially had problems trying to come up with an example that explains exactly what machine learning is, let alone a way to demonstrate it engagingly. I found a post that explained it quite well using shopping for mangoes as an example, but I didn’t feel it was relevant or interesting to a high school student (check it out though; it does provide a pretty good explanation). Somehow, I arrived at the conclusion that a live demo showing an actual machine being trained on real data was the best way. In hindsight, this could have been a terrible idea, but I believed that even if the results didn’t turn out the way I expected, something could still be learned from them.

The use of social media is pretty much ubiquitous in our lives, whether you’re an adult or a high school student. However, are we using the same apps? How easy is it to differentiate a student from an adult based on the social apps they use? And if it is difficult, could a machine be taught to tell whether a person is a high school student or an adult based on the social apps they use? Such a machine might be able to identify underlying patterns in the data that are not observable to a human.

Data Collection

As I mentioned earlier, I foolishly wanted to use real data, so to create my data set for adults, I quickly whipped up a Google form and sent it out through the social apps I use, asking for responses. Specifically, I wanted people aged over 15 (the criterion for an “adult” in this case) and a list of the social apps they used. I also provided a text field for participants to proffer other apps that I might have missed.

With limited time, I managed to collect 40 responses with an average age of 30.1 (s.d. 8.6) years. The eventual list of apps I arrived at was as follows:

Twitter Facebook Pinterest
Instagram Snapchat Tumblr
LinkedIn Vine Whatsapp
TapTalk Periscope Google+
Kik ooVoo Yik Yak
WeChat Viber Skype
Ello Telegram Beme

While I was setting up for my talk, the students were sent a link to a second form to fill out. This form consisted of the same list of apps, with no field to suggest new apps. The responses to this form became the student data set, and luckily, there were exactly 40 responses, with an average age of 14.0 (s.d. 0.8) years.

The following bar chart shows a summary of everyone’s responses. Note that we, as humans, can see that TapTalk is used by neither students nor adults and is therefore useless as a feature. LinkedIn, Periscope, Ello and Telegram are used only by adults (though not all adults) and would be good indicators of adults. Similarly, Skype seems to be much more popular among the students and could be a good indicator of students.

Summary of Respondent Data

The Machine

The type of machine I used was a binomial logistic regression classifier. This means there are two kinds of output: student or adult (binomial); it uses curve-fitting techniques to learn (logistic regression); and it is able to sort samples of data into categories or classes (classifier). The machine learns from the data fed to it by adjusting certain coefficients, or properties, that control how much influence particular features have.

A feature is a single property that can be measured, e.g. does a person use Facebook: Yes or No. In this case, the features are the usages of each of the social apps. Combining all the features of a person creates a feature set and the corresponding type of person (adult or student) is the class. Every feature set will have a corresponding class. An example of a feature set would be a person who uses Facebook, Instagram, Snapchat and Skype, and since this person is a student, their class is student.

I used the feature sets of 25 adults and 25 students to train the machine, and the remaining feature sets were used to test the machine’s ability to predict the type of person based on the apps they used. The purpose of testing is to see how well the machine has learned by presenting it with feature sets it has not seen before and seeing how well its predictions or outputs compare to the actual classes.


Training the machine took less than a second and a total of 5 iterations. This means it took 5 adjustments of the machine’s coefficients to get its outputs to match the actual classes with acceptable accuracy. In this case, the machine was able to learn to the point where its output was 96% accurate, i.e. from the training data, the machine only got 2 people’s classes wrong.
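The actual training above was done on the survey data; as a stand-in, here is a minimal sketch of how a binomial logistic regression classifier learns by iteratively adjusting its coefficients, using made-up data with one deliberately strong feature (playing the role of Skype). NumPy, the learning rate, the iteration count and the synthetic data are all my assumptions, not details from the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the survey: 50 people, 10 binary app features.
# Feature 9 is made far more common among students, mimicking the
# strong separators (like Skype) seen in the real data.
X = rng.integers(0, 2, size=(50, 10)).astype(float)
y = np.array([0] * 25 + [1] * 25)           # 0 = adult, 1 = student
X[y == 1, 9] = (rng.random(25) < 0.9)       # students mostly use feature 9
X[y == 0, 9] = (rng.random(25) < 0.1)       # adults mostly don't

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Batch gradient descent on the logistic-regression (log-loss) objective:
# each iteration nudges the coefficients to better match the classes.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)            # predicted probability of "student"
    grad_w = X.T @ (p - y) / len(y)   # gradient of the mean log-loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

train_acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {train_acc:.0%}")
```

As in the demo, the coefficient for the strongly separating feature ends up with the largest magnitude, because the machine leans on it to tell the two classes apart.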

The real test comes from the new data – the 30 feature sets that the machine has never seen before. For this, the machine was 87% accurate in predicting a student or adult. A confusion matrix can be used to help visualise the results and identify where the machine lost some of its accuracy. The confusion matrix for the data is shown below:

This is a very useful tool for comparing and analysing machine learning ability. Output Class is what the machine predicted, Target Class is the actual class, and 0 and 1 represent adults and students respectively. The grey squares represent the row and column totals (e.g. machine outputs of adult were correct 73% of the time, while student target classes were identified 79% of the time) and the blue square is the total accuracy. If we look at the top left green square, here the machine predicted adult and the actual class was adult (the two 0’s). This happened 11 times.

In the red square to the right of it, the target class was 1 or student, but the output class remains as adult. This means the machine got it wrong, and this happened 4 times and is the main contribution to the loss in accuracy. In terms of predicting students (the bottom right green square), the machine did this correctly for all of them, resulting in 100% accuracy as shown in the grey square.

It may take a while to get your head around the matrix, but it’s a very useful tool for quickly visualising results.
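For anyone who wants to recreate the matrix by hand, here is a sketch of how a confusion matrix is tallied from target and output classes. The convention here (rows are targets, columns are outputs) may be transposed relative to the plot above, and the lists of classes are reconstructed from the counts described in the text (11 adults correct, 4 students mistaken for adults, the remaining students correct).

```python
def confusion_matrix(targets, outputs):
    """counts[i][j] = number of times target class i was predicted as
    class j (0 = adult, 1 = student)."""
    counts = [[0, 0], [0, 0]]
    for t, o in zip(targets, outputs):
        counts[t][o] += 1
    return counts

# 11 adults predicted correctly, 4 students mistaken for adults,
# and 15 students predicted correctly.
targets = [0] * 11 + [1] * 4 + [1] * 15
outputs = [0] * 11 + [0] * 4 + [1] * 15
cm = confusion_matrix(targets, outputs)
print(cm)  # → [[11, 0], [4, 15]]

# Total accuracy: correct predictions over all predictions.
accuracy = (cm[0][0] + cm[1][1]) / sum(map(sum, cm))
print(f"{accuracy:.0%}")  # → 87%
```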

The other component of the machine we can look at is the actual values of the coefficients that were adjusted within it. How these are applied will be difficult to explain (I barely understand it!) but put simply, the higher their magnitude the more important the corresponding feature is to the machine in determining its output.

In this example, all coefficients were 0 (meaning not used at all), except for four features. These features and their coefficients were:

Facebook 0.0877
LinkedIn -2.1244
Whatsapp -1.3608
Skype 2.3353

What these values mean is that the machine relied heavily on the LinkedIn and Skype features to tell whether a person was a student or an adult. It also used Whatsapp to a lesser extent, and Facebook a little bit. These pretty much support what we observed at the beginning by visually inspecting the data, meaning that in this case a machine does not have much of an advantage over a human. These features make the identification quite easy because there is such a big difference in usage for these apps.
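Ranking the features by coefficient magnitude can be sketched like this; the values are the ones listed above, and the logic is just the “higher magnitude means more important” rule described earlier. The sign interpretation assumes adults are labelled 0 and students 1, as in the confusion matrix.

```python
# The reported coefficients, paired with their features.
coefficients = {
    "Facebook": 0.0877,
    "LinkedIn": -2.1244,
    "Whatsapp": -1.3608,
    "Skype": 2.3353,
}

# Rank features by magnitude: the bigger |coefficient|, the more the
# machine leans on that feature. The sign shows which class the feature
# points towards (positive → student, negative → adult, given 0 = adult
# and 1 = student).
ranked = sorted(coefficients.items(), key=lambda kv: abs(kv[1]), reverse=True)
for app, coef in ranked:
    print(f"{app:10s} {coef:+.4f}")
# Skype and LinkedIn dominate; Facebook barely matters.
```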

So what happens if we get rid of these features and train the machine without them? As a human, can you tell which features will be important now, and how much?

Taking out LinkedIn, Whatsapp, Skype and TapTalk (since no one uses it anyway) resulted in 66% training accuracy and 60% test accuracy. Woah, that’s a pretty big drop! Here’s the confusion matrix:

And coefficient values:

Twitter -0.6108
Facebook -0.00014
Pinterest 0.3066
Instagram 0.3714
Vine -0.2109

It’s immediately evident from the confusion matrix where the machine is going wrong. It’s getting students wrong more often than right (only right 40% of the time); however, it’s still doing pretty well at identifying adults. From the coefficients, we see that there’s no really strong feature any more. Instead, the machine has to look at patterns mostly within Twitter, Pinterest, Instagram and Vine usage to figure it out. Did you see these coming? You might have been able to name some apps, but figuring out their relative importance might have been trickier. In this case, the machine might have the upper hand over a human.


An overall accuracy of 60% is still not too bad because it’s greater than chance. The machine is still performing better than if we were to try to figure out the type of person based on a coin toss, which at least demonstrates it has learned something. There are also ways we can look at improving the results of the machine:

  1. Probably the easiest is to add the highly differentiating features like Whatsapp, LinkedIn and Skype back into the mix
  2. We could also collect more data from people to increase the size of the training data set
  3. Expand the number of features available by including more in the survey, such as gaming or fitness apps (however, sometimes too many features can be a problem and the machine can be overwhelmed)
  4. We could improve the learning algorithm
  5. We could use a different learning algorithm
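As a taste of option 5, here is a toy sketch of a different learning algorithm: a k-nearest-neighbours classifier, which classifies a person by voting among the most similar people seen during training instead of fitting coefficients. The tiny data set and feature ordering are made up for illustration.

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points,
    using Hamming distance (number of differing app answers)."""
    dists = sorted(
        (sum(a != b for a, b in zip(row, x)), label)
        for row, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Tiny made-up example: two "adults" (0) who use LinkedIn and two
# "students" (1) who use Skype. Features: [LinkedIn, Skype, Facebook].
train_X = [[1, 0, 1], [1, 0, 0], [0, 1, 1], [0, 1, 0]]
train_y = [0, 0, 1, 1]
print(knn_predict(train_X, train_y, [0, 1, 1]))  # → 1 (looks like a student)
```

Different algorithms draw different boundaries through the same data, so swapping the classifier is a legitimate (if not guaranteed) way to chase better accuracy.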

This was a very simple example of how machines can learn from real data, and I was very happy the live demo worked well enough to show some actual learning. I used social apps to try and be relevant but consider this: if we change that around, and look instead at certain websites people visit, or the types of videos they watch, couldn’t we also train machines to deliver specific content or suggest similar websites or videos to check out?

This is the essence of targeted advertising and content. Have you ever gone to a website and found it conveniently showing an ad for something you actually do need? It’s not luck. Something, somewhere, has tracked what you’ve been doing and based on what it knows about similar behaviours has classified you as a likely target for this particular ad.

Scary? Maybe. However, the potential convenience it offers could be worth it. After all, similar ideas are used for auto-complete, predictive text, suggestions on YouTube and similar short-cut behaviours. Machine learning is everywhere, whether it’s obvious to you or not. Have a think about where you might have missed it lurking. Is something like Judgement Day a possible threat from machine learning, or is there something else we need to be worried about? 😉

This is Not the Greatest Blog in the World… Nor is it a Tribute

Hello and welcome to my blog. I am Dr J Pau. If you’re seeking medical advice you’re barking up the wrong tree. I wouldn’t know what the philtrum was if it were right under my nose. I took the easy route to getting my title and only had to slog through the process of writing a novel thesis.

My background is in mechatronics engineering, a hodgepodge of mechanical, electronics and software engineering, which culminated in an undergraduate degree from the University of Auckland. This left me with a generalist skillset with no specific field of expertise.

I have since diluted my skills further by pursuing doctoral and post-doctoral studies, both of which included a focus on biomedical engineering. This involved working with biosignals such as electromyography and electroencephalography, which are electrical signals generated by muscle tissue and the brain, respectively. Currently, when people ask me what I do, I say I’m a biomechatronics engineer and spend the next 10 minutes breaking down and explaining the components of this excessively complicated portmanteau of professions.

For the past few years I have been an active researcher in my field, while keeping my eye on the development and growth of technology – particularly wearable devices at the consumer level. Now is an incredibly exciting time. The reduced development and prototyping costs, the accessibility to crowd-funding, and the commercialisation of research technology, have all contributed to what I like to think of as a flood of new-wave, potentially game-changing technologies that could significantly affect the way we live our day-to-day lives. Technologies (I don’t quite feel like calling them “products” yet) like the Oculus Rift, Meta, Totem, Leap, Myo, and more, are being released into the market and the developer community is playing a substantial role in determining their applications.

The main purpose of this blog is to provide my thoughts, reviews and informative commentary on these devices and technologies (when I can afford them). I offer my perspective, from a research and technology-enthusiast point of view. These are of course my opinions, so even if they are complete rubbish, as long as they spark some discussion or pondering it’s a win in my book. I also firmly believe that technology is a tool that is best applied to solving problems. A great number of problems could be solved by taking existing technologies and applying them in different or non-conventional industries – the problem lies in the lack of awareness, initiative and communication channels, but that’s another issue entirely.

The gaming industry is a good example. Any technology that enhances the gaming experience is likely to be applicable across multiple industries – think of Microsoft’s Kinect. The multi-billion-dollar gaming industry is a huge motivator for technology development, and the accessibility of its new technologies (through APIs and SDKs) makes it relatively easy to repurpose them for a more positive societal or communal impact.

Finally, I also believe that (technological) knowledge is still surprisingly inaccessible. This seems counter-intuitive, but even though the internet has vastly improved access to information, it has also amplified the noise. I feel that physically being in New Zealand still leaves us isolated, even in the digital space, and attempting to distil the abundance of information can be overwhelming. Maybe this blog can help with that process, or at least raise awareness in a different capacity. One of my next posts will cover existing tech blogs in NZ and hopefully find you something to follow, if this blog isn’t quite what you’re looking for. No offence taken 🙂

P.S. In case you’re wondering, the title of this post is a reference to a song by Tenacious D. I occasionally do things like this.