In the end, I went with The Inevitable, but there was something else too. Finally, I decided to buy a guide to speed reading. Hopefully, when I come back here after a month or a year, I’ll be grateful for this decision.
If not? Well, right now I can’t afford not to try. There are so many amazing written things waiting to be read.
Two days ago Apple released iOS 11 (I wrote about it more in my summary of WWDC 2017). One of the big new features was ARKit – a new framework for augmented reality experiences. If you haven’t had a chance to get familiar with it yet, either as a developer or an end user, here is an amazing demo shown during WWDC:
Unreal Engine demo by Wingnut AR – an augmented reality studio led by Sir Peter Jackson
Ben Chestnut, co-founder and CEO of MailChimp – a marketing automation tool – talks about the history of the company, but also about management and cultivating a creative culture. The video is full of smiles and inspiration; I highly encourage you to watch it!
There are hundreds of tech conferences around the world, and every one is different. One of them, @Scale, is a series of events for engineers who build systems that work at huge scale – systems that handle traffic from millions of people, have extremely complex infrastructure, or are maintained and developed by tens or hundreds of software engineers.
Not everyone or every company will get there. But some of us will, for sure. If you want to be ready (or you are just curious), I highly encourage you to watch the video recordings from the latest @Scale Conference, which took place at the San Jose Convention Center on the 31st of August.
Binge-watching, also called binge-viewing or marathon-viewing, is the practice of watching television for a long time span, usually of a single television show.
It’s Saturday. Sometimes you are simply out of fuel and the only gas station is your couch. So if there is no help for you and you need to spend all day (binge-)watching TV, here is my proposal so you won’t feel guilty about wasted time tomorrow:
There is an observation in the field of AI called Moravec’s paradox, which says that activities like abstract thinking and reasoning, or skills classified as “hard” – engineering, maths, or art – are far easier for machines to handle than unconscious sensory or motor activities.
It’s much easier to build specialized computers that mimic adult human experts (professional chess or Go players, artists – painters or musicians) than to build a machine with the skills of a one-year-old child: the ability to learn how to move around, recognize faces and voices, or pay attention to interesting things. Easy problems are hard and require enormous computational resources; hard problems are easy and require very little computation.
Researchers look to the theory of evolution for an explanation – our unconscious skills were developed and optimized by natural selection over millions of years of evolution. And the newer a skill is (like abstract thinking, which appeared “only” hundreds of thousands of years ago), the less time nature has had to adjust our brains to handle it.
It’s not easy to interpret Moravec’s paradox. Some say it describes a future where machines take over jobs that require specialist skills, leaving people to serve an army of robotic chefs and analysts. Others argue that the paradox guarantees AI will always need human assistance. Or, perhaps more accurately, that people will use AI to improve those skills which nature left less developed.
One thing Moravec’s paradox does prove: the fact that we built a computer that beats humans at Go or chess doesn’t mean that General Artificial Intelligence is just around the corner. Yes, we are one step closer. But as long as AGI means “a full copy of human intelligence” to us, it will only get harder over time.
How do you achieve long-term goals? By continuously making small steps toward success. If we use the power of habit, we’ll automate the process of getting better, every single day.
Do you want to reduce how often you open social media on your device? Remove Facebook/Twitter/Instagram from your launch screen and make sure you need to tap at least a couple of times to get there. Design your environment to make good habits easier to achieve and add more steps between you and bad behaviors.
Do you have 25 more minutes? Watch these and many more hints from James Clear on how to be 1% better every day.
Yesterday was another big day for Apple. During their “Special Event” (the first ever held at the Steve Jobs Theater) we saw new Apple Watches, a new Apple TV, and new iPhones, including the most advanced and desirable of all – the iPhone X.
Among all the new awesomeness – the edge-to-edge screen, the Neural Engine (a piece of hardware dedicated to machine learning computations), and Animoji – there was something that could be a bigger revolution than we might think.
Thanks to Face ID, we will be able to unlock our devices with just our face. Face recognition will replace Touch ID, the fingerprint-based authentication method – also for Apple Pay. Really complex hardware powered by machine learning will keep our devices and our money safe.
Will it work and be reliable enough? We’ll see in the coming months.
But there is something else worth noting. For the first time, a secure device won’t ask us for authentication. There will be no prompt to place your finger or enter your PIN/password. Instead, in most cases, the iPhone will do it for us, automatically.
Such a small UX detail, but such a big change. You will need to do absolutely nothing and still feel safe and secure.
Of course, there will probably be hacks and imperfections. But the first step toward a password-free future has been made. And again, nature did the job for us, making each person unique.
As technical people, we usually see AI solutions as a bunch of really smart algorithms operating on statistical models and doing nonlinear computations – in general, something extremely abstract, with its roots in programming languages.
But, as the term “neural network” may suggest, many of those solutions are inspired by biology, primarily the biological brain.
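As a tiny sketch of that inspiration (not any particular library’s API – all names below are my own), here is how a single artificial neuron can be modeled in plain Python. Weighted inputs stand in for synapses, the bias plays the role of a firing threshold, and the sigmoid activation squashes the result into a 0–1 “firing rate”:

```python
import math

def sigmoid(x):
    """Smooth activation function, loosely analogous to a neuron's firing rate."""
    return 1.0 / (1.0 + math.exp(-x))

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: sum the weighted inputs (the 'synapses'),
    shift by a bias (the 'firing threshold'), and apply the activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# With strong positive weights and a high threshold, this neuron only
# "fires" when both inputs are active - roughly a logical AND gate.
weights = [10.0, 10.0]
bias = -15.0
print(artificial_neuron([1, 1], weights, bias))  # close to 1 (fires)
print(artificial_neuron([0, 1], weights, bias))  # close to 0 (stays silent)
```

Real neural networks stack thousands of such units and learn the weights from data, but the biological metaphor – signals accumulating until a neuron fires – is already visible in this single unit.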
Some time ago, DeepMind researchers published a paper, Neuroscience-Inspired Artificial Intelligence, where they highlighted AI techniques that directly or indirectly come from neuroscience. I will try to sum it up, but if you would like to read the full version, it can be found under this link:
One of many definitions describes AI as a hypothetical intelligence created not by nature but artificially, through an engineering process. One of its goals is to create human-level, General Artificial Intelligence. Many people debate whether such an intelligence is even possible, but there is one thing that proves it is: the human brain.
It seems natural that neuroscience is used as a guide or an inspiration for new types of architectures and algorithms. Biological computation very often works better than purely mathematical and logic-based methods, especially when it comes to cognitive functions.
Moreover, if current, still far-from-ideal AI techniques can be found at the core of how the brain functions, it’s quite likely that the engineering effort will pay off some time in the future.
Finally, neuroscience can also serve as a good validation for existing AI solutions.