Singularity University New Zealand Summit 2016: Aftermath

 This post begins with an introduction to Singularity University and the recent Summit in New Zealand. If you are familiar with the organisation and/or the event — skip to my thoughts below.

I had the pleasure of attending the inaugural Singularity University Summit in Australasia, held in Christchurch, New Zealand, last week. Thought-Wired, a company that I co-founded and have been working on for a few years now, was invited to demonstrate the product we are developing. Our product uses brainwave-reading technologies to allow people with severe physical disabilities to communicate and interact with their environment.

Before each day’s sessions and during the breaks, the team and I showed how our software can be used to drive a (small) robotic vehicle and do some basic communication activities, such as saying “yes” or “no”, or answering multi-choice questions. In light of the theme of the conference, we also tried to explain to the Summit delegates the technology’s potential for exponential impact.

In a nutshell, Singularity University aims to “empower a global community with the mindset, skillset, and network to create an abundant future”, with exponential technologies at the core of this mission. It runs programs for individuals and organisations in which futurists and experts share ideas and advice to engage, inspire and arm people with the tools they need to effect positive change in the world. Much of this builds on Moore’s Law, and the mindset they teach is to go beyond innovation and instead aim for disruption.

“Innovation: doing the same things better. Disruption: doing new things that make the old things obsolete.” – SU

There are also programs to support ideation, start-up creation and acceleration, mentorship and enterprise scaling.

We were fortunate enough to be able to catch most of the speakers over the two-and-a-half days of the Summit, and the breadth of topics ranged from cyber security, crypto-currency, and astrophysics to education, autonomous cars and artificial intelligence. The phrase “mind-blowing” was pretty much the mantra of the event (although the official call-to-arms may have been something like “Understand, adapt and thrive”).


Takeaways

Thoughts – not food!

Even though I wasn’t technically a delegate, it was great to interact with New Zealand’s influencers, innovators and other like-minded people, and to participate in some of the dialogue on Twitter. I’ve had some time to decompress and gather the remaining pieces of my blown mind, so I thought I would share some of my takeaways. I noticed that many of the more vocal delegates were educators, and found it interesting that a lot of the discussion I’ve seen has been from an educational perspective. I must need to expand my social network or something (not that it’s a bad thing, since I do think education is absolutely vital).

This is actually a good segue into my first thought. Educators are so important, especially when you consider their role in transferring knowledge and skills to our youth and supporting them. The abundance of educators at the Summit was great, and I have no doubt that many young people will indirectly benefit from it. But what of the old youth?

I am oxymoronically referring to people around my age (maybe even younger), who have long since left the education system (including tertiary) and would be unlikely to attend something like the Summit of their own accord. I’m sure many of these people feel stuck in their jobs or dismayed at the lack of tangible effect they are having, or can have, on the world. How do we reach them, spread the message and catalyse them into action? I feel very privileged to see and hear about what I do — but how do we extend that to a greater audience? My circle of friends contains an unusually high number of people with PhDs, and even then there aren’t many who think about disruption or exponential technologies beyond following them in their news feeds.

Furthermore, how do we get this type of thinking to trickle into other social groups, which might not be as highly educated but are still brimming with untapped potential, drive and ability? How do we build momentum?

This leads on to my second thought: what happens now, after the Summit? At the end of last week we felt inspired and motivated, ready to change the world overnight. Today, though, we’re back at our regular jobs — is the feeling still there? Is it as strong? How do we maintain the momentum we’ve built, and spread it through our networks and communities? Is there a risk of being trapped in an echo chamber where we motivate and congratulate ourselves, but nothing extends beyond our own little bubble? (And we’re beginning to see the effects of these little bubbles.)

So many questions, and I wish I knew the answers. There is some comfort in the fact that others are thinking the same thing and attacking it head-on – for example, a hui was organised in Auckland recently, and I’m sure there are similar happenings around the country. What I really want to see from here, though, is growth.

My next takeaway follows on from these ideas of accessibility and inclusion, and relates to the actual technological side of things. At the Summit, we were told that 4 billion people are living on less than 5 USD a week. That’s more than half the world’s population. That’s insane.

And for these 4 billion people, are things like self-driving cars going to be useful? Are they going to see much benefit from advances in AI? Are their governments going to be able to afford to buy and maintain surgical robots or advanced biotechnologies? Exponential technologies that only cater to the top 20% or so of the population will not be sustainable, and access to exponential technologies is something that really needs to be considered.

I believe access to technology was brought up by one speaker, but it was not a theme that received much discussion. For companies that have been identified as disruptive and exponential, I would be curious to see how sustainable their exponential growth is; where they reach saturation; and, at that saturation point, what proportion of the population they have affected. I’ve illustrated what I mean in the figure below.

Turns out, my drawings look like the work of a 5-year-old

Exponential growth is great, but the world obviously has a finite number of people on it, and optimal saturation is only reached when uptake hits 100%. A technology that benefits a small portion of the population may experience exponential growth, revenue and profit – but what about optimal impact?
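For anyone who wants to play with the idea behind my drawing: the textbook way to model growth against a finite ceiling is a logistic curve, which starts out exponential and bends into an S-shape as it approaches saturation. Here is a minimal Python sketch, with every number made up purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy model: adoption grows exponentially at first, but a finite
# population forces it onto an S-shaped (logistic) curve.
# Every number here is made up purely for illustration.
population = 7.4e9   # approximate world population
adopters_0 = 1e6     # initial user base
rate = 1.0           # per-year growth rate while far from saturation

years = np.linspace(0, 25, 200)
exponential = adopters_0 * np.exp(rate * years)
logistic = population / (1 + (population / adopters_0 - 1) * np.exp(-rate * years))

plt.plot(years, np.minimum(exponential, 1.2 * population), label="pure exponential")
plt.plot(years, logistic, label="logistic (finite population)")
plt.axhline(population, linestyle="--", label="total population")
plt.xlabel("years")
plt.ylabel("people reached")
plt.legend()
plt.show()
```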

This brings me to my final thought. The phrase “exponential technology” was thrown around a lot at the Summit. This, coupled with the talks about autonomous cars, AI, crypto-currency and so on, naturally leads you to associate exponential technology with advanced, high-tech solutions.

I want to remind you that, as a society, we are currently living in the most advanced stage of technology ever, and yet a significant proportion of the population doesn’t have access to it and/or isn’t benefiting from it. There is still huge potential for low-tech solutions to have exponential impact, so it isn’t always necessary to look for the most advanced solution.

In my opinion, you don’t have to be disruptive to make an impact, and even just a little bit of innovation can still be the difference between something incremental, and something exponential. There are only six degrees of separation between every human being on the planet, and this interconnection between us runs deep. With our compassion, resolve, intelligence, ability to communicate, and will to survive, there’s no reason why we, ourselves, should not be exponential.

The Dash Front and Back

The Dash (by Bragi) Review

The Dash is made by a German company, Bragi, and was funded by a Kickstarter campaign in which almost 16,000 backers contributed 3.4 million USD. According to Bragi’s marketing department, these are the world’s first wireless smart earphones. They’re quite expensive, priced at 299 USD excluding shipping and GST. For all the details on features and specifications, check out the product page. Bear in mind that a long list of features does not equate to a “smart” device.

My primary use for The Dash would be running, and it does seem to be targeted at more active users. I have tried numerous earphones and headphones, and none have met my expectations. Yurbuds Inspire worked well for a few runs, but then the silicone ear pieces got dirty and wouldn’t clean well, losing most of their hold; they’d slip out of my ears and pop off the earphones. With the wireless Jaybird Bluebuds X, I could never get the right fit, no matter how many combinations I tried. They couldn’t handle running: a combination of the natural bouncing and sweat would eventually dislodge them.

Left: Yurbuds with their twist-lock silicone ear pieces (Source); Right: Jaybird’s outer ear clip (Source)

The constant need to readjust the earpieces of both products made running with them annoying, especially if I was out for an hour or so. It got to the point where I eventually stopped running with music.

The Good

Music and Playlists

The 4 GB of storage and the option of storing up to 4 different playlists mean there is decent flexibility in what music you can queue up. Audio quality is fine, and the ability to switch between up-tempo and more motivational playlists is a useful feature, because you can adjust the style of music to suit your run type.

Fit

To stop sweat from getting into your ears and causing the earphones to slip out, there needs to be a pretty much perfect seal between the earphone and the ear. As soon as the seal is compromised, you get earphone slippage, and battling to keep them in becomes the main focus of a run. The Dash fit me well. To get a good seal, I had to manipulate my ear to open up the ear canal a bit more so I could shove each ear piece in tightly.

The best kind of seal (Source)

Once in, The Dash stayed in place and lasted up to an hour (the length of my longest run) without falling out or slipping. A lot of sweat was generated during that time, and I’m quite sure The Dash would have been fine for longer runs. In my book, this is a huge plus: earphones that actually stay in with no adjustments required. The seal feels tighter than that of the Bluebuds, and the twisting mechanism felt about as secure as the Yurbuds’.

If a secure fit were the only thing I was looking for, The Dash would be perfect for me. However, I’m a bit fussier than that and, unfortunately, there is quite a long list of reasons why I won’t be getting my own set any time soon.

The Bad

Fit

Once you get a good fit, The Dash sits in the ears wonderfully. However, it is absolutely crucial that you get the perfect fit right off the bat. I found it quite difficult to tell until I actually started running, by which time it was too late. If any sweat makes it through the seal, no amount of ear-pulling or Dash-shoving will ever reinstate it.

My mantra for earphones (Source)

I also wore The Dash casually (not running) and found that after half an hour my ears were getting a little tender and I had to take them out. This may have been caused by the steps I took to get a good seal; I didn’t experience the discomfort while running.

Controls

The Dash uses an optical touch sensor to detect taps and swipes for controlling all of its features (music, activity tracking and feedback, audio transparency, etc.). This works pretty well when you’re stationary, but swiping, tapping and holding the earpieces becomes significantly more difficult while moving.

The problem is that the entire control interface of The Dash revolves around touching it in some way, which can break the seal and completely undo any sort of “PerfectFit” you managed to achieve. Even then, because pretty much every part of the body is bouncing around, the wrong commands often get sent.

A gesture-based interface would work with no contact. Even my phone can pick up when I’m waving at it, so that should be possible for core commands.

Audio Transparency

The idea behind allowing ambient noise to pass through, improving situational awareness and safety, is a good one, and the Audio Transparency of The Dash works well… when you’re not moving. Once I started actually moving quickly, I found that it picks up and amplifies all the sounds around you. Including wind. If you want to hear what running through a tornado sounds like, turn it on while running. I didn’t get a chance to try it in the shower or while swimming, but I imagine it would sound like banging your head against a tsunami.

Ahh… that nice breeze (You can find a hundred of these photos here)

So… I turned that feature off; I could barely hear my music over the amplified wind. A firmware update that adds some audio filtering could help remove the noise.

Heart Rate Monitoring

I’m not sure how The Dash measures heart rate. The device has told me numerous times to adjust the earpiece to get a better recording, but I was not willing to compromise a good seal once I had one. The sensors listed on the product page include the optical touch sensor and a 3-axis accelerometer, gyroscope and magnetometer – none of which can really be used to measure heart rate. So that has me confused.

When I did have it positioned correctly, the heart rate quoted to me felt lower than expected, and I suspect it was influenced by cadence. Unfortunately, I don’t have an actual heart rate monitor to compare against. However, based on my pace, how I felt and my experience with heart rates, I am quite confident the values The Dash quoted were significantly off.

Battery

The really cool thing about The Dash is that its case doubles as a charger; supposedly it can restore both earphones to full charge up to five times. Combined with the play and track time of approximately 3 hours and the standby time of 250 hours, you would think you wouldn’t need to charge The Dash that often (in theory, roughly 18 hours of listening per fully charged case).

I don’t know how they came up with those figures, but my experience was way off. For some reason, both the charger and the earphones had completely discharged after a week of non-use, and from a full charge I got the low-battery audio prompt after about 50 minutes of running.

Think my battery may have been leaking (Source)

I probably need to test battery performance more thoroughly, but those were my initial findings.

Miscellaneous

Just a couple of minor gripes. It took me a while to get my phone to pair with The Dash over Bluetooth. I don’t know which device was at fault, but finding and eventually connecting to them took several attempts. Once connected, I again hit a wall trying to connect The Dash to the Bragi Android app and, after several attempts, gave up.

I did not like how cadence was reported as a total. This isn’t a Fitbit – knowing I’ve taken 2,417 steps over the last 15 minutes is not a useful metric. Steps per minute would be far more informative (2,417 steps over 15 minutes is about 161 steps/minute, which is the number I actually want to know).

Conclusion

I’ve neglected to mention many of the finer selling points of The Dash, and instead focussed on disappointing features. I would honestly be perfectly happy with something that just stays in my ears and plays music on shuffle. I don’t even need the ability to change tracks or store more than 50 songs.

For the price point and features offered by The Dash, I would expect them to work well – and that is my main problem with it. An impressive list of features that I’d rather not use because of their sub-par performance makes those features redundant, and means the final price tag of nearly 600 NZD is far too high.

The Dash is a well-designed and well-intentioned piece of hardware, but it lacks a little in execution and in testing with actual, active users. It may be the world’s first pair of smart wireless earphones, but The Dash would need to get a little smarter before I’d consider buying it. Perhaps fewer features, done better, would be a good start.

I’m looking forward to either firmware updates, or Bragi’s next version: The Dot.

(haaaaaaa, I’m so funny *cries*)

 

P.S. Does anyone have a pair of running earphones they swear by? Absolutely must stay in, and have local storage for music. That’s pretty much all I need.


Running Data: Damn Nike – Part 2

In Part 1 of this series I gave a “brief” outline of my journey through running and how I ended up using particular training tools and apps. The purpose of this post is to highlight an issue that is important to me but echoed by startlingly few. The scary part is that it affects basically anyone who uses a device to track fitness, activity or workout data – and yet the number of people table-flipping, making a fuss or even aware of the issue is depressingly low.

I am talking about data access (or ownership). The Fitbit on your wrist, the GPS watch you use for your runs, and even your smartphone are all recording data that you generate and sending it to companies, who analyse it and give you interesting insights into your activities. They can tell you how much REM sleep you get at night, give you an overview of how active you were during the day, or summarise how well you did on a training run (by providing information on splits, pace, cadence and/or heart rate).

Wear all the wearables! Source.

In the rapidly growing wearable technology industry, companies are trying to add as many analytics and features to their devices as they can. Unsurprisingly, their shotgun approach can still fail to provide the specific insights you’re after. But it’s OK, you can just dive through the data yourself and easily pull out the metrics you need or are interested in, right? NOPE. What, not even simple data you could look at using the built-in functions of Excel? HELL NOPE.

And this is the problem. The data is there. You’ve created it – it’s a quantification of you and it’s just sitting there. Sometimes, you can jump through certain hoops to get it, and other times you have to drag yourself over a mile of broken glass… using only your face.

The frustration (or pain) is real. Source.

The problem can be split into two interacting parts:

  1. Current apps don’t provide the analytics or metrics that are of interest to you, or, if they do, they charge you for them (these can vary from really basic information to advanced analyses).
  2. These apps make it difficult or impossible to export your data.

The first part wouldn’t be so bad if not for the second. The only reason I can think of for companies making it difficult to take data out is to force platform (and brand) loyalty. This is complete and utter bullshit. If a company’s product were good enough to solve any issues I had with problem #1, I wouldn’t even want to extract my data. Basically, the company knows its product is below par, and the only way it can keep people using it is to lock them in. That is not sustainable.

I’m going to talk about the running apps I am familiar with, which should give you an idea of the scale of offending that is out there.

Runkeeper

Runkeeper is actually the best of the three when it comes to exporting training data. Reason being: they actually let you. You can export all your data within a custom period to a .zip of GPX files (these contain the GPS data), which can be easily imported into other platforms. Fantastic! The metrics and analyses Runkeeper provides are quite basic. Individual workout data is fine, but if you want to look at how you’ve gone over time (to see how training is going), the only thing you can look at is distance. Duration, speed, pace, heart rate, or elevation? Bah, who needs that info? Unless you want to pay for it (30 USD a year)! No thank you, but I am grateful they made it easy to get all my data out (‘Export Data’ is in the ‘Settings’ menu) and do not contribute to problem #2.

You’re doing good Runkeeper. Not fantastically… but good.
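As an aside, once you have the GPX export, the over-time numbers Runkeeper charges for are straightforward to compute yourself. Here is a minimal sketch using the third-party gpxpy library (the folder name is hypothetical):

```python
import glob
import gpxpy

# Sketch: per-workout distance, duration and pace from a folder of
# exported GPX files. Assumes gpxpy is installed (pip install gpxpy);
# the folder name is made up.
for path in sorted(glob.glob("runkeeper-export/*.gpx")):
    with open(path) as f:
        gpx = gpxpy.parse(f)
    distance_km = gpx.length_3d() / 1000.0   # gpxpy reports metres
    duration_s = gpx.get_duration() or 0     # seconds; None if no timestamps
    if distance_km and duration_s:
        pace = (duration_s / 60.0) / distance_km  # minutes per km
        print(f"{path}: {distance_km:.2f} km in {duration_s / 60:.1f} min "
              f"({pace:.2f} min/km)")
```

From there, plotting pace or elevation over time is a short step away – no subscription required.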

Endomondo

Similar to Runkeeper, Endomondo requires a subscription (29.99 USD a year) for you to see any type of data spanning more than one workout. However, they also contribute to problem #2 by limiting data export to one workout at a time (manually). I have 152 workouts on Endomondo, so uh… if it takes me about 2 minutes to download one workout (navigating to the workout and saving it)… that’d take me… five freakin’ hours. That’s the thanks I get for subscribing for a year.

See, most people wouldn’t bother trying to get their data out, so in a way their strategy of locking people to the platform must be working to some extent. However, as soon as I found out I had to continue my subscription to see detailed analytics, I was ready to jump ship.

Fortunately, in the case of Endomondo, there is an easy solution. Endomondo Export is a handy tool that allows you to bulk export your data through an easy-to-use interface. I did have some problems with missing elevation data in the GPX files it output, and wrote a Python script to help clean these (a sketch of the idea is below). I suspect the data was bad because of my phone (Galaxy SII), but if anyone else tries the export tool and has problems, let me know and I’ll see if my script can help you out too. If you are a bit more technically inclined, there is also this Python script, but I cannot verify its effectiveness.
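The clean-up amounts to filling in the missing elevation values; something like the following sketch (gpxpy again – the linear-interpolation strategy here is an assumption, not my original script):

```python
import gpxpy

# Sketch: fill missing <ele> values in a GPX file by linearly
# interpolating between the nearest points that do have elevations.
with open("workout.gpx") as f:
    gpx = gpxpy.parse(f)

points = [p for track in gpx.tracks
            for segment in track.segments
            for p in segment.points]

known = [(i, p.elevation) for i, p in enumerate(points) if p.elevation is not None]
for i, p in enumerate(points):
    if p.elevation is not None or not known:
        continue
    before = [(j, e) for j, e in known if j < i]
    after = [(j, e) for j, e in known if j > i]
    if before and after:
        (j0, e0), (j1, e1) = before[-1], after[0]
        p.elevation = e0 + (e1 - e0) * (i - j0) / (j1 - j0)
    else:  # gap at the very start or end: copy the nearest known value
        p.elevation = (before[-1] if before else after[0])[1]

with open("workout-clean.gpx", "w") as f:
    f.write(gpx.to_xml())
```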

Bear in mind that Endomondo can change access to their API at any time, which could render either of these tools defunct. I would suggest you create backups now, while you still can – otherwise you may find yourself in the same situation as I did with the final app…

Apparently this hand gesture indicates “meh”.

Nike

I use a Nike+ GPS Sportswatch and I love it. However, for a 100-billion-dollar company whose market ranges from suburban mums to the world’s most elite athletes, the design of Nike’s website and app is absolutely terrible. In the few years I’ve been using it, there have been no significant improvements in the user interface design or functionality of the website, which in this fast-paced world is totally crap. I’ve had problems logging in, maintaining sessions, viewing activity data, and browsing friends’ data. Admittedly, it’s all free – but that is not a valid excuse. Individual workout metrics are fine, and over-time metrics include things such as average distance and pace changes. Still not as in-depth as it could be, but sufficient for most people.

That being said, my recommendation is to stay away from Nike+ and its Running app. This is the absolute worst example of a company trying to lock you into its platform. You absolutely CANNOT export any data – not even individual workout data. Previously, you could use third-party tools, similar to Endomondo Export, to bulk export workouts from Nike. However, Nike has recently shut off access to its APIs except for official “Fuel Lab Partners” or whatever they’re calling them – basically, other companies Nike is working with because, despite its wealth and resources, it can’t build a decent app on its own.

The closing off of their API means that the third-party tools I used no longer work, and the developers of those tools have retired them or stopped updating them to keep up with Nike’s changes. I have figured out a way to still get my data out and export it to my platform of choice (RunningAhead: I can’t say enough about how good it is, even if just to park data). At the moment the process is a little convoluted, but I am interested to know whether there are people who do want to get their data out of Nike. If there are, I will try to turn it into a web solution – or at the very least, post how to do it and see how long it takes Nike to prevent or block it.

In the meantime, I’m syncing my data to RunningAhead and saving up as fast as I can for a new running watch.

Ironically, the Nike tick showed up when I image searched “good”. Fortunately, I was able to find enough fingers to give it.

 

Running Data: Why Is it an Uphill Battle? – Part 1

I originally started writing this post in October 2015. Life has been quite hectic since then, so I didn’t quite get around to publishing it. Recent events have caused me to revisit it, and I think some of it may still be interesting for people to read. I will be following this post up with an up-to-date rant and a couple of other running-related posts.

I have the attention span of a gnat, and as a result I am abysmal or at best mediocre at most of my pursuits. However, two anomalies to this trend are running (inexplicably) and data analysis (pretty much my job while working at Uni).

I picked up running about 5 years ago and when I first started I neither followed a training plan nor logged my workouts. However, after a few years I was becoming more and more addicted to the endorphins and began to get more serious. On the recommendation of a friend, I started using Runkeeper with my phone to track runs. At the time, its popularity stemmed from its integration with Facebook, which made it really easy to find friends and quietly judge them for not running enough.

Image of a Woman Being Judged

What silly clothes to run in too. Source

My training plans up to 2013 had always been pretty much “run lots”, and I was getting faster mostly because I was building the muscles used in running, which up to that point I’d never really developed. Another friend recommended Endomondo to me, and I was intrigued by its offer of a dynamic, “smart” training plan. I paid for the premium service and had a plan that gave me pacing indications, distances and structure for my training runs. It would also adapt my workouts if it found I was maintaining faster or slower paces than it recommended.

As smartphones counter-intuitively increased in size, I could no longer tolerate running with mine (a Samsung Galaxy SII) and switched to a Nike GPS watch for tracking my runs. I’m really happy with the change, and if you’re serious, I highly recommend investing in a decent watch. It beats lop-siding your gait by strapping a brick to your arm, or struggling to fit one into a waist belt. The workout data from my watch is manually synced to Nike+, a platform that stores activity data from all of Nike’s devices and its Running app (which gives the same experience as the watch but uses a phone’s in-built GPS).

I continued using the Endomondo training plan with the Nike+ watch, and herein began my initial foray into the war of fitness data platforms. It seems that most platforms – not just the ones I’ve used, but also Strava, MapMyRun and others – try to force user loyalty by making it either impossible or incredibly difficult to move data between platforms. This meant that the data from my Nike watch could not be easily taken out and uploaded into Endomondo to contribute to my training program tracking.

In fact, Nike+ has no option to export any data. I was enraged, because it’s my data! It appears that many others shared my feelings and as such, have made tools* to export the data through the Nike+ API (which is primarily made available to Nike partners). Using these tools, I was able to keep my training plan up to date.

I wish this was how my run data was locked… so secure. Source

More recently, I have discovered other apps, such as SyncMyTracks, which are designed specifically to sync data across multiple fitness platforms – but these usually come at a cost. They’re also vulnerable: because they’re not partnered with the platform owners, changes to a platform’s code or interface could easily shut them out.

Endomondo’s training plan served me well for a couple of years, but I didn’t really like the unpredictable changes it would make to my program – for example, telling me I had an 8 km run the next day, but on the day changing it to 18 km. I also began to feel like I had gotten the gist of the training it was trying to get me to do, and figured I could design and follow my own program, so I cancelled my subscription to Endomondo and retained a free account. What I didn’t know was that I would also lose access to much of my previous run data, and this is what infuriated me the most.

Endomondo would let me view distance analytics over the history of my runs, but any other information, such as pacing, speed, elevation, heart rate, and even number of workouts, required a premium subscription. I can kind of understand their stance from a commercial perspective, as it must be very costly to calculate the average pace over the distance data I provided them (!), and I therefore must pay 30 USD a year to see it… The data is there! It’s been calculated! So it really pisses me off that there is no real additional cost to them to show me my data.

And to top it off: what if I think, screw Endomondo, I’m going to take my data elsewhere? Is there an in-built export tool I can use to take the data that I provided back out? Of course not. I have to rely on another third-party tool that could, again, stop working at any moment.

Furthermore, what happens if I switch out my Nike watch for a Garmin, TomTom, Timex or Polar? These have their own platforms too, so am I just going to have to cross my fingers and hope that third-party tools exist for them all? I’m not sure what the answer is, but I’m hoping to mitigate the problem by putting all my data into a single platform: RunningAhead. It is absolutely free for tracking all kinds of workouts – not just running. There are no tricks or gimmicks, and the analytics are provided in detail comparable to all the other platforms. You can import a whole range of training files and store as much data as you want. If you want to back up or export your entire training history, you can. The only downside is that the design is a bit dated, but that’s of little concern.

Form follows function, just the way I like it.

I’ve already uploaded all my workouts from Runkeeper, Endomondo and Nike+ into RunningAhead. I’ve also tried to streamline future uploads as I continue to use my Nike watch so that I don’t have to manually upload a run each time. If you’re interested in how I am trying to do this see my follow-up post. I’m also planning to share a bit more about the current tools I’m using for training and managing my running data.

 

* Due to changes in the Nike+ API, many of these tools no longer work. See upcoming post for the rage this has induced.

 

Machine Learning: A Social App Example

I was recently invited, with Thought-Wired, to give a talk at Hobsonville Point Secondary School, an institution leading the way in modern approaches to learning and education in Auckland. The facilities and staff are so mind-bogglingly awesome that I always spend most of my time there lamenting my own educational experience. Case in point: I was there to talk about machine learning and its relevance to artificial intelligence, a subject I did not learn about until the final year of my Bachelor’s – and these students were about to learn it before having sat NCEA Level 1 [so, so jealous].

For my talk, I ran through a real-life example and demo of how machine learning works. This post aims to supplement that by going through the example in a bit more depth and covering some points I missed in my talk. Hopefully it will be understandable to high school students, but if there are sections that aren’t clear, or if I’ve fallen into the abyss of academic vernacular, leave a comment so I can fix it up.

Problem

I initially had problems coming up with an example of exactly what machine learning is, and of how to demonstrate it engagingly. I found a post that explains it quite well using shopping for mangoes as an example, but I didn’t feel this was relevant or interesting to a high school student (check it out though, it does provide a pretty good explanation). Somehow, I arrived at the conclusion that a live demo showing an actual machine being trained on real data was the best way. In hindsight, this could have been a terrible idea, but I believed that even if the results didn’t turn out the way I expected, something could still be learned from them.

The use of social media is pretty much ubiquitous in our lives, whether you’re an adult or a high school student. However, are we using the same apps? How easy is it to differentiate a student from an adult based on the social apps they use? And if it is difficult, could a machine be taught to tell whether a person is a high school student or an adult based on the social apps they use? Such a machine might be able to identify underlying patterns in the data that are not observable to a human.

Data Collection

As I mentioned earlier, I foolishly wanted to use real data, so to create my data set for adults I quickly whipped up a Google form and sent it out through the social apps I use, asking for responses. Specifically, I wanted people aged over 15 (the criterion for an “adult” in this case) and a list of the social apps they used. I also provided a text field for participants to proffer other apps that I might have missed.

With limited time, I managed to collect 40 responses with an average age of 30.1 (s.d. 8.6) years. The eventual list of apps I arrived at was as follows:

Twitter Facebook Pinterest
Instagram Snapchat Tumblr
LinkedIn Vine Whatsapp
TapTalk Periscope Google+
Kik ooVoo Yik Yak
WeChat Viber Skype
Ello Telegram Beme

While I was setting up for my talk, the students were sent a link to a new form to fill out. This form consisted of the same list of apps, with no field to suggest new ones. The responses to this form became the student data set, and luckily there were exactly 40 responses, with an average age of 14 (s.d. 0.8) years.

The following bar chart shows a summary of everyone’s responses. Note that we, as humans, can see that TapTalk is used by neither students nor adults and is therefore useless as a feature. LinkedIn, Periscope, Ello and Telegram are used only by adults (though not all adults) and would be good indicators of adults. Similarly, Skype seems to be much more popular among the students and could also be a good indicator, but for students.


Summary of Respondent Data

The Machine

The type of machine I used was a binomial logistic regression classifier. This means there are two kinds of output: student or adult (binomial); it uses curve-fitting techniques to learn (logistic regression); and it is able to sort samples of data into categories or classes (classifier). The machine learns from the data fed to it by adjusting certain coefficients, or properties, that control how much influence particular features have.

A feature is a single property that can be measured, e.g. does a person use Facebook: yes or no. In this case, the features are the usages of each of the social apps. Combining all the features of a person creates a feature set, and the corresponding type of person (adult or student) is the class. Every feature set has a corresponding class. An example of a feature set would be a person who uses Facebook, Instagram, Snapchat and Skype; since this person is a student, their class is student.

I used the feature sets of 25 adults and 25 students to train the machine, and the remaining feature sets were used to test the machine’s ability to predict the type of person based on the apps they used. The purpose of testing is to see how well the machine has learned by presenting it with feature sets it has not seen before and seeing how well its predictions or outputs compare to the actual classes.
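If you want to replicate the setup at home, the whole train-and-test pipeline is only a few lines of Python with scikit-learn. This is not the code from the talk: the random stand-in data is mine, and the L1 penalty is a guess prompted by the mostly-zero coefficients reported below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

apps = ["Twitter", "Facebook", "Pinterest", "Instagram", "Snapchat",
        "Tumblr", "LinkedIn", "Vine", "Whatsapp", "Skype"]  # subset for brevity

# Each feature set is a 0/1 vector: does this person use each app?
# Classes: 0 = adult, 1 = student. Random stand-in data, not the survey.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(80, len(apps)))
y = np.array([0] * 40 + [1] * 40)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=50, stratify=y, random_state=0)

# An L1 penalty drives unhelpful coefficients to exactly zero, which
# matches the mostly-zero coefficients reported below (an assumption).
clf = LogisticRegression(penalty="l1", solver="liblinear")
clf.fit(X_train, y_train)

print("training accuracy:", clf.score(X_train, y_train))
print("test accuracy:", clf.score(X_test, y_test))
for app, coef in zip(apps, clf.coef_[0]):
    if coef != 0:
        print(f"{app}: {coef:+.4f}")
```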

Results

Training the machine took less than a second and a total of 5 iterations. This means it took 5 adjustments of the machine’s coefficients to get its outputs to match the actual classes with acceptable accuracy. In this case, the machine was able to learn to the point where its output was 96% accurate, i.e. on the training data, the machine only got 2 people’s classes wrong.

The real test comes from the new data – the 30 feature sets that the machine has never seen before. For these, the machine was 87% accurate in predicting a student or adult. A confusion matrix can be used to help visualise the results and identify where the machine lost some of its accuracy. The confusion matrix for the data is shown below.

This is a very useful tool for comparing and analysing machine learning ability. Output Class is what the machine predicted, Target Class is the actual class, and 0 and 1 represent adults and students respectively. The grey squares represent linear totals (e.g. machine outputs of adult were correct 73% of the time, while student target classes were identified 79% of the time) and the blue square is the total accuracy. If we look at the top-left green square, here the machine predicted adult and the actual class was adult (the two 0s). This happened 11 times.

In the red square to the right of it, the target class was 1, or student, but the output class was still adult. This means the machine got it wrong; this happened 4 times and is the main contributor to the loss in accuracy. In terms of predicting students (the bottom-right green square), the machine got all of them right, resulting in the 100% accuracy shown in the grey square.

It may take a while to get your head around the matrix, but it’s a very useful tool for quickly visualising results.
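If you want to reproduce the matrix itself, scikit-learn has it built in. Continuing the sketch above:

```python
from sklearn.metrics import confusion_matrix

# Continuing the earlier sketch. Note: scikit-learn puts the actual
# (target) class on the rows and the predicted (output) class on the
# columns, which may be transposed relative to the figure above.
y_pred = clf.predict(X_test)
cm = confusion_matrix(y_test, y_pred)  # class order: 0 = adult, 1 = student
print(cm)
# cm[0, 0] -> adults correctly predicted as adults
# cm[1, 0] -> students wrongly predicted as adults (the main error source)
# cm[1, 1] -> students correctly predicted as students
```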

The other component of the machine we can look at is the actual values of the coefficients that were adjusted within it. How these are applied is difficult to explain (I barely understand it!), but put simply, the higher a coefficient’s magnitude, the more important the corresponding feature is to the machine in determining its output.

In this example, all coefficients were 0 (meaning the feature was not used at all), except for four. These features and their coefficients were:

Facebook 0.0877
LinkedIn -2.1244
Whatsapp -1.3608
Skype 2.3353

What these values mean is that the machine relied heavily on the LinkedIn and Skype features to tell whether a person was a student or an adult. It also used Whatsapp to a lesser extent and Facebook a little bit. This pretty much supports what we suspected at the beginning from visually inspecting the data, meaning that in this case a machine doesn’t have much of an advantage over a human. These features make the identification quite easy because there is such a big difference in usage between the groups when it comes to these apps.

So what happens if we get rid of these features and train the machine without them? As a human, can you tell which features will be important now, and how much?

Taking out LinkedIn, Whatsapp, Skype and TapTalk (since no one uses it anyway) resulted in 66% training accuracy and 60% test accuracy. Woah, that’s a pretty big drop! Here’s the confusion matrix:

And coefficient values:

Twitter -0.6108
Facebook -0.00014
Pinterest 0.3066
Instagram 0.3714
Vine -0.2109

It’s immediately evident from the confusion matrix where the machine is going wrong. It’s getting students wrong more often than right (only right 40% of the time); however, it’s still doing pretty well at identifying adults. From the coefficients, we see that there’s no really strong feature any more. Instead, the machine has to look at patterns mostly within Twitter, Pinterest, Instagram and Vine usage to figure it out. Did you see these coming? You might have been able to name some apps, but figuring out their relative importance would have been trickier. In this case, the machine might have the upper hand over a human.

Discussion

An overall accuracy of 60% is still not too bad, because it’s greater than chance. The machine is still performing better than if we were to try to figure out the type of person based on a coin toss, which at least demonstrates it has learned something. There are also ways we could look at improving the results of the machine:

  1. Probably the easiest is to add the highly differentiating features like Whatsapp, LinkedIn and Skype back into the mix
  2. We could also collect more data from people to increase the size of the training data set
  3. Expand the number of features available by including more in the survey, such as gaming or fitness apps (however, sometimes too many features can be a problem and the machine can be overwhelmed)
  4. We could improve the learning algorithm
  5. We could use a different learning algorithm

This was a very simple example of how machines can learn from real data, and I was very happy the live demo worked well enough to show some actual learning. I used social apps to try and be relevant but consider this: if we change that around, and look instead at certain websites people visit, or the types of videos they watch, couldn’t we also train machines to deliver specific content or suggest similar websites or videos to check out?

This is the essence of targeted advertising and content. Have you ever gone to a website and found it conveniently showing an ad for something you actually do need? It’s not luck. Something, somewhere, has tracked what you’ve been doing and based on what it knows about similar behaviours has classified you as a likely target for this particular ad.

Scary? Maybe. However, the potential convenience it offers could be worth it. After all, similar ideas are used for auto-complete, predictive text, suggestions on YouTube and similar short-cut behaviours. Machine learning is everywhere, whether it’s obvious to you or not. Have a think about where you might have missed it lurking, and about whether something like Judgement Day is a possible threat from machine learning – or is there something else we need to be worried about? 😉

Waze: The Best Navigation App You’re Not Using

Face it, Auckland traffic sucks and it’s getting worse. The current work on our traffic infrastructure and investment (or lack thereof) into public transport will not have tangible effects for years to come; all the while, people will continue to spend countless hours getting well-acquainted with the rears of the vehicles in front of them.

A ghastly rear to be acquainted with. Source

Empirical knowledge only helps you avoid traffic when destinations and times are familiar. For new trips, you could use Google Maps (and its navigation), which usually provides the most direct route and an estimate of traffic conditions. The traffic estimates are based on GPS data Google pulls anonymously from cell phone users, and the amount of data must be tremendous. This likely affects how fast the data can be processed, meaning that changes in conditions are slow to show up. Furthermore, there is limited information on why traffic is the way it is, which reduces your ability to plan. For example, traffic caused by rubberneckers can clear relatively quickly, while roadworks cause more lasting delays.

Wouldn’t it be better/faster if someone stuck in traffic could say to Google Maps, “traffic is currently [expletive] here because of [reasons]”, and have that information instantly relayed to other drivers considering the route?

This is the essence of Waze: a crowd-powered traffic navigation app that was in fact acquired by Google. Its maps are charted by the community, meaning it remains up-to-date and provides more accurate mappings in less accessed locations.

The Power (Pros) of Waze

Waze functions much like any other navigation app: you type in where you want to go and it figures out the quickest way to get there. It even has other map APIs, such as Google’s, built in, which pretty much guarantees it’ll be able to find the destination you’re searching for. What sets it apart is users’ ability to log the following information, which is made available to other Waze users (Wazers):

Report all the things!

Traffic

Stuck in traffic? Log it through the UI (less safe) or by using voice commands (totally safe). Waze will then mark the stretch of road you’ve been travelling as having traffic, along with the approximate speed. This can be done for varying traffic conditions, and you can even do it for opposing traffic. Once you’re past the slow zone, you can also log the cause, such as an accident, breakdown or hazard.

Construction/Detours

Temporary hazards or effects on routes, such as detours and road closures can all also be logged within Waze. Combined with the traffic data, this allows Waze to adjust the routes it suggests when navigating to reduce journey times.

Police Checkpoints

Similar to current GPS navigators, Waze has fixed speed and red-light camera locations and warns you if you’re approaching them. In addition, users are able to add the locations of speed traps and checkpoints, for both marked and unmarked vehicles, to the maps.

Petrol prices

Waze also tracks petrol prices. I find this useful when looking for a place to fill up in an unfamiliar area and need something quick and cheap(er).

Social Features/Integrations

A navigation app shouldn’t really need to be social, but Waze gives you points for using it and bonus points for reporting traffic and hazards. As you level up you can unlock different avatars to use on Waze, which is how other Wazers will view you on the map. You don’t have to be visible on the map, but doing so will allow you to interact with other Wazers if you want to. A more useful function is the ability to share your drive with friends so they can see where you are currently and your ETA.

Not only that, but Waze can pull meeting location data from your calendar, and you can send automated text alerts to your co-conspirators so they can tell how far away you are from the rendezvous, or whether you’re going to be late. They don’t need to have Waze to receive text alerts, and they even have the option of viewing an in-browser map of your route to track your progress in real time.

Whatcha doing on K Road at 2 am buddy?

Limitations of Waze

Waze is fantastic and I use it whenever I need help navigating. My main problem with it is that sometimes the routes it suggests are not optimal and can be a little counter-intuitive. Sometimes this works out really well (Cool, a new faster route I can use in the future!) or not so well (Where the feck are you taking me?!). So for now, I have to use a bit of common sense and discretion in conjunction with the navigation provided by Waze.

I believe the main reason for this is a lack of user-provided data, particularly in New Zealand. Hence this blog post! I am hoping that some of you will try it out, and see its potential. Waze will only be as powerful as the number of people using it so go and spread the word – it’s available on Android, iOS and even Windows Phone.

I am hoping that as the number of Wazers grows, it’ll be able to take advantage of machine learning/AI to better understand the routes it suggests. For example, it might realise, “Ok, it took you 10 minutes to turn right at this four-lane intersection so the next Wazer going this way will instead use a traffic-light controlled intersection further down”. Eventually it’ll turn into some high level traffic management system that autonomously guides flows of traffic everywhere with optimal travel times for everyone.

The other thing Waze could have is more voice commands. Traffic reporting by voice is a cinch, but other hazards and commands need to be supported too – just to maintain safety while driving. I am quite confident that this will be added soon enough.

The Sweetener

All in all, Waze is awesome: it does everything your current navigation app does and then some. If you’re still not convinced, consider this. To help promote the latest Terminator movie, Arnie has lent his voice to Waze. So, if the idea of the T-800 telling you to “turn left if you want to live”* doesn’t make you want to use this app, I don’t know what will.

"You've arrived at your destination... Get out!"


 

* Disclaimer: Arnie doesn’t actually say things like this all the time (a huge shame!), but if he’s not your cup of tea you can also get instructions from Colonel Sanders or some NBA players (lol).

 

Myo by Thalmic Labs Review

At last! It’s time to write a commentary on something I may actually have an expert opinion on. Maybe. I received my pre-ordered Myo last week, all the way from Thalmic Labs in Toronto. Interestingly, I got it only a month or so after people I knew who had pre-ordered theirs significantly before me (around the time it was first announced). It goes to show that it is possible for tech companies to sort out their production to meet current demand (*cough* OnePlus *cough*).

The Myo. Source: Thalmic Labs Inc.

The Myo is an armband, or chunky bracelet, that sits high on the forearm, just below the elbow. It allows the wearer to control software applications and physical devices through movements that are recognised by its 9-axis IMU (combined gyroscope, accelerometer and magnetometer). Its unique feature is its use of the electromyography (EMG) signals of the muscles in the forearm.

Basically, the Myo is able to detect different forearm muscle contraction patterns and link them with motions such as making a fist, spreading the fingers or pivoting the hand. Such recognition can occur without the forearm (and Myo) moving at all. Combined with the IMU, this gives the Myo the ability to sense control gestures and actions of the hand and fingers. Easy parallels can be drawn to the gloved system in Minority Report or the gesture-based computers at Tony Stark’s disposal in Iron Man and The Avengers.

Tom Cruise in Minority Report and Robert Downey Jr in Iron Man. Both using natural interfacing methods. Sources: Bit Rebels and Astronaut

I did my postgraduate studies on using the EMG signal as a control method for prostheses and exoskeleton devices – a slightly more difficult problem, because the aim was to provide continuous control (for example, reproducing an intended trajectory), which requires constant monitoring of changes in the muscle activity of antagonistic muscle pairs. The Myo is simpler (and therefore consumer-ready) because it only needs to recognise particular snapshots of muscle activity. And it does, really well.
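To make the “snapshot” idea concrete: gesture recognition from EMG typically reduces to computing a simple feature, such as the RMS amplitude of each channel over a short window, and feeding it to a classifier. The sketch below uses random stand-in data and is purely illustrative – Thalmic’s actual pipeline is proprietary and surely more sophisticated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(emg_window: np.ndarray) -> np.ndarray:
    """RMS amplitude per channel over a short window.

    emg_window has shape (n_samples, n_channels), e.g. a ~200 ms slice
    of the Myo's 8 EMG channels. RMS is one common EMG feature.
    """
    return np.sqrt(np.mean(emg_window ** 2, axis=0))

# Stand-in training data: windows of raw EMG, each labelled with the
# gesture being held while it was recorded.
rng = np.random.default_rng(1)
windows = rng.normal(size=(100, 40, 8))  # 100 windows, 40 samples, 8 channels
labels = rng.choice(["fist", "spread", "rest"], size=100)

X = np.array([window_features(w) for w in windows])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# At run time, each incoming window is classified independently: a
# "snapshot" of muscle activity, with no trajectory tracking needed.
print(clf.predict(X[:3]))
```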

Pros

For a new technology, Thalmic have done an extraordinary job of streamlining the process of getting the Myo out of its nicely and efficiently packaged box, onto your forearm, and recognising control gestures. The instructions are very clear, and it’s the little things, like checks to make sure the Bluetooth dongle is attached properly and demonstration videos that exemplify each step, which really make the difference. Any Luddite would be able to do it – and that signifies a fantastic user experience.

The Myo itself works. There is no over-promising of its ability to recognise gestures. Out of the box, allow it 2-4 minutes to stabilise and you’ll be able to use it immediately. The gesture library is currently quite limited but there are a few to choose from:

The basic Myo gestures. Source: Thalmic Labs Inc.

Obviously, the muscle activity required to produce these gestures is the easiest to differentiate, but it shows Thalmic have carefully considered which gestures to use and how to use them. This is reinforced by the “wake-up” command (a double tap of the thumb and middle finger), which tells the Myo to start listening for a gesture. This feature prevents false positives, or unintended gestures, from carrying through to the device being controlled.

Combinations of gestures can be used to generate a secondary tier of commands. For example, an opened hand followed by a clenched fist could generate an action on its own. This opens up further possibilities for control and actions once a user becomes proficient at stringing different commands together.

The Myo itself is comfortable. I’ve worn it for longer periods of time (20-30 min) and have not yet experienced discomfort. It also comes with adjustment clips for those with smaller forearms. I don’t think there’s an equivalent for larger forearms, but all I’ve had to deal with are those skin depressions that fade after some time. I’m hoping to eventually see how it feels after an hour or more of use.

Thalmic has its own Myo app market, called the Myo Market. It contains the “connectors” that are currently compatible with the Myo and allow you to use it to move your mouse, manipulate your web browser (all the good ones are supported: Chrome, Firefox and Safari), change presentation slides or control your media player (Spotify, VLC, iTunes and WMP are all supported). The connectors allow you to integrate Myo with a host of other applications and games, such as Minecraft, Popcorn Time, and even Saints Row IV, but I have yet to test them out. There’s even a connector for Trello, which at the time of writing is the only productivity tool supported.

Cons

The Myo is relatively new, and although its developer program has been active for a while (still a little butthurt I didn’t get in…), I expect more and more connectors to come out soon to further populate the Market. At the moment, I feel like there isn’t enough functionality for a user to completely integrate the Myo into their interfacing experience. It’s not much use when you can only use it with a few select apps.

My other main issue is that there’s no over-arching system tying all the connectors together. For example, to use the Spotify connector, the Spotify window has to be in focus, so I need to alt-tab or mouse-click the Spotify window before I can perform the gesture that I want. Cutting the Myo out of the process is faster and easier. However, if I could tell Spotify specifically to listen for the next gesture, and other apps to ignore it, I wouldn’t need to disrupt my current workflow. To me, this calls for an app selector, something like the weapon selection wheel commonly found in games where the protagonist can carry a gazillion weapons (a toy sketch of the idea follows below). I’m going to take this idea to the developer forums to see if it can gain some traction.

The weapon selection wheel in GTA V. Source: GTA Wiki
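The routing layer only needs to know which app is currently “selected” and forward gestures to that app’s handler, with no window focus required. Everything in the sketch below is hypothetical – none of these names are a real Myo API:

```python
# Toy sketch of an "app selector" layer for gesture routing. All names
# are hypothetical; this is just the dispatch logic being proposed.
class GestureRouter:
    def __init__(self):
        self.handlers = {}   # app name -> {gesture: action}
        self.active = None   # currently selected app

    def register(self, app, gesture_map):
        self.handlers[app] = gesture_map

    def on_gesture(self, gesture):
        if gesture.startswith("select:"):  # e.g. chosen via a selection wheel
            self.active = gesture.split(":", 1)[1]
        elif self.active in self.handlers:
            action = self.handlers[self.active].get(gesture)
            if action:
                action()  # fires without ever changing window focus

router = GestureRouter()
router.register("spotify", {"fist": lambda: print("play/pause"),
                            "wave_out": lambda: print("next track")})
router.on_gesture("select:spotify")  # pick Spotify from the "weapon wheel"
router.on_gesture("fist")            # -> play/pause, no alt-tabbing required
```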

As I said before, the gestures are quite simple, and I expect additional base gestures to be added as the Myo’s recognition algorithms improve. More complex movements will allow more to be done, and faster. At the moment, because the gestures emphasise extremes of hand and wrist movement, it can get quite tiring when a multitude of gestures is used in sequence – even more so given that such range of motion of the wrist isn’t often used.

The IMU works well and can accurately sense movement. Its main function so far is to operate as a dial when rotating the forearm (for example, as a volume knob), or as a panning tool (for example, moving a cursor). The problem I’ve found, especially with panning, is that when this is combined with a gesture there is quite a bit of drift, which makes it difficult to use the two features in tandem. I gave up trying to click on buttons using the mouse connector. Some sort of compensation is required.

It is also limited by its position close to the elbow, where the data available to it isn’t as rich as it would be around the palm, for example. This is particularly evident during rotation, as the wrist rotates through a greater range than the upper forearm; getting the upper forearm to rotate is slightly awkward without straightening the elbow. I am, however, interested to see how this might be overcome.

Finally, if you watch the promotional video for the Myo and compare it to what you can do now, you’ll find that some of the applications were only ideas. While definitely feasible, not all of them are available yet, such as smart home or robotics control. However, I believe support for them is coming – just don’t get your hopes up too early.

Conclusion

The Cons are really points of improvement, and Thalmic and the developer community are probably working hard to address them. The Myo is certainly an innovative product and has the potential to reshape how we interact with technology. Much of its success depends on developers, and on how they exploit its features to provide a seamless, intuitive and natural interfacing method. I especially look forward to more connectors being developed for productivity apps. Hopefully I’ll have the time to provide some input, as I do want to see it grow and improve. I really want to be able to wear it constantly and have my environment respond to my wishes without the need for visual displays or feedback. Not only that, but I believe the Myo has the potential to really enhance our existing interfacing experiences. The hard part is figuring out how.

Bonus clip: Demo of using the Myo to control the Parrot AR.Drone quadcopter. The implementation is basic (leverages mostly off the IMU data), but it’s a good example of where the technology could be used.
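
For a sense of what that kind of mapping involves, here is a rough sketch that normalises arm roll and pitch into a [-1, 1] command range. The tilt limit and output format are assumptions for illustration, not the AR.Drone API.

# Map IMU orientation to drone tilt commands, in the spirit of the demo.

def clamp(x, lo=-1.0, hi=1.0):
    return max(lo, min(hi, x))

def imu_to_drone(roll_deg, pitch_deg, max_tilt_deg=30.0):
    # Normalise arm tilt into the [-1, 1] range a flight command expects.
    return {
        "roll": clamp(roll_deg / max_tilt_deg),
        "pitch": clamp(pitch_deg / max_tilt_deg),
    }

print(imu_to_drone(15.0, -45.0))  # {'roll': 0.5, 'pitch': -1.0}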

Inbox by Gmail Review

I’ve been using Inbox by Gmail since late October and I am hoping I will never need to go back to old school Gmail… ever. In case you don’t know, Inbox is Google’s latest attempt at controlling how we receive and manage email, and it has been met with mixed responses from internet denizens. In this post, I will share my experiences with it and explain why the approach Google has taken seems to be in the right direction for me, an organisational nightmare with constant backlogs of email up to my ears. Inbox organises email and has advanced search features that help keep track of the swathes of information, documents and files that inevitably accumulate over time. Note: When I refer to “Inbox” I mean the application as a whole, and by “inbox” I mean the inbox within Inbox.

Shown below is a screenshot of the main view provided by the Inbox web application. I mostly work on a PC during the day, so I use the web application most often, and the app on my phone or tablet only when I’m out prowling the streets. Immediately noticeable is how clean it is – a single column of content that displays emails in discrete time blocks, e.g. This month, Yesterday and Today. Attachments are immediately accessible, and actions (described later) are available for each email. The top-left button provides a comprehensive menu of Inbox’s additional functionality, and, if desired, a Hangouts panel can be opened and pinned. The advantage of this clean layout is that it is easily reproduced on mobile devices, so the interaction is the same whether you are on a phone, tablet or desktop computer. And the best thing? No ads!

Screenshot of the Inbox Desktop App

Inbox tackles email in a noticeably different way, and to me this is the crux of its innovation – other reviews seem to have missed this concept and complain that Inbox does not facilitate the way they’re used to emailing, or that adapting to it is a chore. Change is almost always met with some resistance, and adjusting may require some habitual fixes, but Inbox is not really blowing the concept of email out of the water. Instead, it gently reshapes emailing by presenting it as a manageable To Do or task list, rather than purely as a messaging system. The new terminology, such as replacing “archive” with “mark done”, is not just superficial; it is backed by an underlying framework that supports task management processes. These features are summarised as follows:

Inbox Actions:

The treatment of emails as tasks is most apparent in the actions available for each email. Besides the usual Reply or Forward, Inbox provides the buttons shown below:

Main actions for emails, from left to right: Pin, Snooze, Mark Done, and More

Pinning an email marks it as important and requiring further action. In doing so, a Reminder can be added to the email, which shows up under the Subject heading as a cue to why the email was pinned (e.g. to reply to later, or for future reference). All pinned emails can be viewed instantly through the toggle on the top menu bar, and once emails are marked as Done they are automatically unpinned and removed from the inbox. However, for some reason an email cannot be removed from the inbox while keeping its pinned status: the only way to clear a pinned item from the main view is to mark it as Done. This seems counter-intuitive, since lingering pinned items add to the visual clutter – just like in regular Gmail.

Example of a Reminder on a pinned email. The icon and note are visible beneath the subject.

Snooze mimics the snooze feature of an alarm clock. The email being snoozed is removed from the inbox and redelivered at a specified date and time. This emulates functionality previously available only through third-party Gmail extensions, and makes it much easier to come back to emails that require an action: the email no longer clutters your inbox, and you don’t have to worry about remembering to reply.

Marking emails as Done again emphasises the task-oriented nature of the approach. Instead of marking an email as Read or archiving it, the user indicates that they are done with it and no further action is required. This complements the other actions: pinning (to do) and snoozing (to do later). Emails can be marked as Done individually, as a Bundle, or for the entire inbox (Sweep). When the inbox is swept, everything is removed except pinned items; snoozed items will still be redelivered as if they were new emails.
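
Putting the three actions together, here is a toy model of the email lifecycle as I understand it; the class and method names are mine, not Google’s.

# Pin, Snooze, Done and Sweep as a simple state model. Illustrative only.

from datetime import datetime

class Email:
    def __init__(self, subject):
        self.subject = subject
        self.pinned = False
        self.reminder = None
        self.snoozed_until = None
        self.done = False

    def pin(self, reminder=None):
        self.pinned = True
        self.reminder = reminder  # shown beneath the subject as a cue

    def snooze(self, until: datetime):
        self.snoozed_until = until  # redelivered as new at this time

    def mark_done(self):
        self.done = True
        self.pinned = False  # marking done automatically unpins

def sweep(inbox):
    # Sweep marks everything in view as done, except pinned items.
    for email in inbox:
        if not email.pinned:
            email.mark_done()
    return [e for e in inbox if not e.done]

inbox = [Email("Invoice"), Email("Newsletter")]
inbox[0].pin(reminder="pay by Friday")
print([e.subject for e in sweep(inbox)])  # ['Invoice'] survives the sweep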

Bundles:
These essentially replace the Labels system used in Gmail. There are pre-determined Bundles (categories), such as Travel, Purchases, Social, Updates, Forums and Promos, which Inbox automatically sorts emails into, and you can set up your own Bundles based on criteria you specify, similar to creating email filters. The main feature of Bundles is the ability to group emails on similar topics and specify a fixed time for them to arrive. For example, I have my Promo bundle delivered once a day (at the moment the time is fixed at 7 am, but I would eventually like the option to specify it). This means I can glance through it quickly, and for the rest of the day I won’t be bothered by the tonnes of emails I get from daily deal sites, other retailers and promotional activities. I am therefore not inundated or distracted by unimportant emails throughout the day – I get them once, quickly sort through them, and that’s it. Bundles can also be Swept, so the whole lot can be removed from view with a single click to maintain a clutter-free inbox.
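
Conceptually, a Bundle behaves like a filter plus a batching scheduler: matching emails are held back and land together at the scheduled time. A toy model, with matching criteria simplified to keyword checks of my own invention:

# A Bundle as filter-plus-scheduler. Illustrative only; this is not
# how Google implements it.

from datetime import time

class Bundle:
    def __init__(self, name, keywords, deliver_at=time(7, 0)):
        self.name = name
        self.keywords = [k.lower() for k in keywords]
        self.deliver_at = deliver_at  # fixed daily delivery time
        self.held = []

    def matches(self, subject):
        return any(k in subject.lower() for k in self.keywords)

    def receive(self, subject):
        self.held.append(subject)  # held back instead of hitting the inbox

    def deliver(self):
        # At deliver_at, the whole batch lands as one bundled entry.
        batch, self.held = self.held, []
        return batch

promos = Bundle("Promos", ["deal", "sale", "% off"])
for s in ["Huge sale this weekend", "Meeting notes", "50% off shoes"]:
    if promos.matches(s):
        promos.receive(s)
print(promos.deliver())  # ['Huge sale this weekend', '50% off shoes']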

The caveat with Bundles is that they may take time to set up. Existing filters are carried through from Gmail, but some tweaking may be needed to make use of the Bundles provided and to create your own. I had previously used Google Labs’ multiple-inbox add-on, and approached bundling by deleting all my filters and setting the Bundles up from scratch. For the first week, I also had new Bundle emails delivered as they arrived so that I could check the sorting was working correctly and I wasn’t missing anything important. Now I’m not so worried about it, and haven’t missed anything too important yet.

Search:

Inbox supposedly leverages off some powerful search abilities. I have yet to test this thoroughly, but so far past emails and resources seem to be easily found and displayed. The jury is still out on whether it is more accurate than a complex labelling system with keywords – for example, I found “label:chats” really useful for dredging up information from conversations over Gchat, and this behaviour is not reproduced in Inbox.

Summary:

My experience with Inbox has largely been on the desktop version, as that is where I do most of my emailing; many reviews focus on the mobile version, which I suppose is what it was originally designed for. I have gone full early-adopter and completely transitioned to Inbox for all of my emailing, and noticed a substantial drop in the amount of time I spend sorting and clearing unwanted and non-productive email (more time for YouTube!). The task-oriented approach has helped me keep track of actions I need to take while reducing noise and clutter. Right now, I have seven pinned items and a few snoozed emails that will be coming back to me over the next couple of months. On Gmail, I would be struggling to keep items in my inbox below 50 – a self-imposed maximum that dictated how many could be shown on the screen at a time (I despise having to look at a second page of emails – who ever looks at the second page of Google results?).

I’m hoping that Inbox doesn’t fade into oblivion like Wave and Buzz, and from what I’ve seen, the general response is positive. Come full release, I hope it remains ad-free, and I would like to see improvements that allow pinned emails to be hidden, desktop notifications, more flexibility in managing Hangouts chat popups and chat history, and greater stability (I experienced infrequent crashes). The fact that the latest Gmail mobile application (released 6 November) looks quite similar to Inbox is encouraging, as it suggests a smoother transition (and potentially a higher adoption rate) to Inbox. However, if you are already in control of your email and have an effective labelling/filtering system in place, Inbox by Gmail might just mess it all up and not be worth the switch.

I still have several invites for Inbox, so if you’re wanting to give it a go, get in touch. Currently, you’ll need a Gmail address.

This is Not the Greatest Blog in the World… Nor is it a Tribute

Hello and welcome to my blog. I am Dr J Pau. If you’re seeking medical advice, you’re barking up the wrong tree; I wouldn’t know what the philtrum was if it were right under my nose. I took the easy route to my title and only had to slog through the process of writing a novel thesis.

My background is in mechatronics engineering, a hodgepodge of mechanical, electronics and software engineering, which culminated in an undergraduate degree from the University of Auckland. This left me with a generalist skillset and no specific field of expertise.

I have since diluted my skills further by pursuing doctoral and post-doctoral studies, both with a focus on biomedical engineering. This involved working with biosignals such as electromyography and electroencephalography, the electrical signals generated by activity within muscle tissue and the brain, respectively. Currently, when people ask me what I do, I say I’m a biomechatronics engineer, then spend the next 10 minutes breaking down and explaining the components of this excessively complicated portmanteau of professions.

For the past few years I have been an active researcher in my field, while keeping my eye on the development and growth of technology – particularly wearable devices at the consumer level. Now is an incredibly exciting time. Reduced development and prototyping costs, the accessibility of crowd-funding, and the commercialisation of research technology have all contributed to what I like to think of as a flood of new-wave, potentially game-changing technologies that could significantly affect the way we live our day-to-day lives. Technologies (I don’t quite feel like calling them “products” yet) like the Oculus Rift, Meta, Totem, Leap, Myo, and more, are being released into the market, and the developer community is playing a substantial role in determining their applications.

The main purpose of this blog is to provide my thoughts, reviews and informative commentary on these devices and technologies (when I can afford them), from a research and technology-enthusiast point of view. These are of course my opinions, so even if they are complete rubbish, if they spark some discussion or pondering it’s a win in my book. I also firmly believe that technology is a tool that is best applied to solving problems. A great number of problems could be solved by taking existing technologies and applying them in different or non-conventional industries – the problem lies in the lack of awareness, initiative and communication channels, but that’s another issue entirely.

The gaming industry is a good example: any technology that enhances the gaming experience can likely be applied across multiple industries – think of Microsoft’s Kinect. The multi-billion-dollar gaming industry is a huge motivator for technology development, and the accessibility of new technologies (through APIs and SDKs) makes it relatively easy to redirect them towards a more positive societal or communal impact.

Finally, I also believe that (technological) knowledge is still surprisingly inaccessible. This seems counter-intuitive: the internet has vastly improved access to information, but it has also amplified the noise. I feel that being physically located in New Zealand still leaves us isolated, even in the digital space, and attempting to distil the abundance of information can be overwhelming. Maybe this blog can help with that process, or at least raise awareness in a different capacity. One of my next posts will cover existing tech blogs in NZ and hopefully find you something to follow, if this blog isn’t quite what you’re looking for. No offence taken 🙂

P.S. In case you’re wondering, the title of this post is a reference to a song by Tenacious D. I occasionally do things like this.