This is heavy, Doc!
Over the past few decades, technology has progressed at unprecedented rates in directions we never imagined, with rules we previously did not follow. Views have changed, cultures are intertwined and we as a collective society must now find a way to develop the solutions of today for the problems of tomorrow.
This new paradigm was largely shaped by the revolution brought about by the internet. With information readily available and only a few keystrokes away, previously unexplored horizons opened up, and with them came new avenues for ordinary citizens, businesses of all sizes and professionals to earn, learn and move outside of convention. For example: instant messaging allows us to converse in real time with individuals halfway across the globe, creating a boom for businesses through new marketing channels, stronger customer relationships and greater visibility. Widely considered the ‘great equalizer’ of knowledge, the internet has given us access to vast amounts of content, some of which has profoundly changed our approach to various aspects of life.
Today, we are witnessing the giants of the technology industry during the internet era slowly losing their grip on the global market. The titans of tomorrow, not far behind, stand on their shoulders, ready to take over when the time comes. In this so-called ‘Internet Age’, we are able to observe a certain trend with an eerily common denominator among them. Let’s take a look at the three generations of the web so far:
- Web 1.0 — Yahoo was the world’s top search engine, Google was right behind, Amazon was an online book store, eBay was the largest online buy and sell platform. (1994–2001)
- Web 2.0 — Friendster and MySpace had their share on top of the social media stratosphere, Multiply dominated Asia, Facebook was a secondary choice for a communication platform, LinkedIn was introduced and Groupon made its debut. Digg, Reddit and StumbleUpon were the pillars that showed how valuable content generation was to improving communities overall. (2002–2009)
- Web 3.0 — The Mobile Age. Facebook survived and adapted to this new wave, Instagram carved its way through the market, Snapchat and Vine disrupted the influencer space (and eventually fell). (2010–2017)
It then seems reasonable to conclude that a company’s viability in the internet space depends heavily on when it was conceptualized. With each new paradigm shift, the previous generation becomes increasingly out of touch and inches closer to decline, unable to grasp the subtle changes the newer generation brings. Meanwhile, companies with good strategy, clear vision and a strong sense of direction, coupled with flexibility, catch up and eventually overtake them. At one point, Google reportedly offered to sell itself to Yahoo for $1,000,000. In a bizarre turn of events, Yahoo has since become a shell of its former self and now finds itself on the short end of the stick, in no position to bargain. Rupert Murdoch’s now-classic statement perfectly encapsulates this predicament: big will not beat small anymore; it will be the fast beating the slow.
The Diffusion of Innovations Theory
Published in 1962 by Professor Everett Rogers, the book ‘Diffusion of Innovations’ details the process by which an innovation creeps into the market over a specific period of time. It remains one of the most relevant theories to this day, as it explains how, why and at what rate technology spreads through cultures.
As with any technological venture, it’s important to take note of this time frame. To stay competitive in their respective markets, companies have turned to reorganizing and modifying their processes to satisfy their requirements for resource improvement and cost-cutting.
Cryptocurrencies are the primary example we’ll use for the Diffusion of Innovations theory. Powered by a revolutionary technology poised to define the next internet age, blockchain (or distributed ledger technology) offers a diverse set of opportunities for individuals, both now and in the near future, much the way open source software disrupted IT two and a half decades ago. Transactions can now be audited, and with programmable logic known as ‘smart contracts’, information can be set to public (accessible to anyone in the world) or permissioned (accessible only with a private key). This creates a new paradigm for the way information is shared and how businesses interact across a variety of industries, from financial technology to mobile payments, logistics and even tracking donations sent through charity.
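To make the public/permissioned distinction concrete, here is a minimal Python sketch of an append-only, hash-chained ledger. It is purely illustrative: the `ToyLedger` class, its hash chaining and the string standing in for a real private key are all invented for this example, not a real blockchain or smart-contract API.

```python
import hashlib

class ToyLedger:
    """Illustrative append-only ledger; NOT a real blockchain."""

    def __init__(self):
        self.entries = []

    def add(self, data, visibility="public", key=None):
        # Chain each entry to the previous one via a SHA-256 hash,
        # which is what makes the record auditable/tamper-evident.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "data": data,
            "visibility": visibility,  # "public" or "permissioned"
            "key": key,                # stand-in for a real private key
            "hash": hashlib.sha256((prev_hash + data).encode()).hexdigest(),
        }
        self.entries.append(entry)
        return entry["hash"]

    def read(self, index, key=None):
        entry = self.entries[index]
        if entry["visibility"] == "permissioned" and key != entry["key"]:
            raise PermissionError("private key required")
        return entry["data"]

ledger = ToyLedger()
ledger.add("donation: $100 to charity A")                        # public
ledger.add("supplier invoice #42", "permissioned", key="s3cret")
print(ledger.read(0))                  # anyone can read the public entry
print(ledger.read(1, key="s3cret"))    # only the key holder can read this
```

The design choice being illustrated is simply that visibility is a property of each record, enforced at read time, while every record still participates in the same tamper-evident chain.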
In many ways, we can consider the blockchain/distributed ledger era an iteration of a new, decentralized internet, with the vision of creating a trustless economy in which transactional fraud is eliminated, or at least greatly reduced, compared to today.
A wide range of applications have been built, are being built, or are already in use within the span of a single year. These projects are funded through Initial Coin Offerings (ICOs), and their number has grown over the past year to the point of saturation. The total amount raised through ICOs in the first four months of 2018 alone (~$6.3 billion) already surpasses everything raised in 2017. An estimated 100 new projects open their fundraising rounds every month, although we should see a steady decline once regulations kick in.
With that said, has the craze truly died down? The numbers say otherwise. In May of 2017, a study by Cambridge University suggested that about 3 million people owned cryptocurrencies. Today, in 2018, that number is projected to have grown to anywhere between 30 million and 50 million people, and according to a US survey, 3 in every 4 people are at least aware of what cryptocurrencies are. For blockchain and cryptocurrencies to progress further, however, we have to look beyond the profits and toward the long-term benefits the technology brings. For the space to grow and mature, there needs to be actual adoption of these tokens for their intended use cases. Awareness alone is no longer enough to drive up prices.
If we follow the curve in the Diffusion of Innovations, that equates to roughly 14.8% of the world’s population. This suggests that blockchain technology is barely scratching the surface of the early majority phase, where adoption takes significantly longer than in the two preceding segments, the innovators and the early adopters. During this period, an influx of household and institutional capital starts to come in, and we can expect more in the coming months until we reach critical mass. For context, the collective market capitalization of cryptocurrencies ballooned from $18 billion to ~$220 billion between 2017 and 2018 at the time of writing.
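Rogers’ adopter categories come with canonical population shares (2.5% innovators, 13.5% early adopters, 34% early majority, 34% late majority, 16% laggards), so the cumulative thresholds on the curve are simple arithmetic. The sketch below also divides the ownership estimate above by an assumed 2018 world population of roughly 7.6 billion; that denominator is my assumption, not a figure from the text.

```python
# Rogers' adopter categories and their canonical shares of the population.
segments = {
    "innovators": 2.5,
    "early adopters": 13.5,
    "early majority": 34.0,
    "late majority": 34.0,
    "laggards": 16.0,
}

cumulative = 0.0
for name, share in segments.items():
    cumulative += share
    print(f"{name:>15}: {share:5.1f}%  (cumulative {cumulative:5.1f}%)")

# Rough cryptocurrency ownership vs. world population, using the upper
# estimate from the text (50 million) and an ASSUMED ~7.6B people in 2018.
owners = 50e6
world = 7.6e9
print(f"owners as share of world population: {owners / world:.2%}")
```

One takeaway from the cumulative column: the early majority phase begins once roughly the first 16% of a population (innovators plus early adopters) has adopted.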
Taking this into account, we gain the ability (at least at a surface level) to see through current trends, understand how technology makes its way into the general market, and make more confident decisions based on which phase a given technology occupies on the diffusion scale.
Technology is Evolving At An Exponential Rate
Peter Diamandis explains the 6D’s of Exponential Growth.
Now we can turn our attention to what Peter Diamandis calls the 6D’s of exponential growth: digitization, deception, disruption, dematerialisation, demonetisation and democratisation. These are the six phases any product or idea undergoes on its way to making massive cultural and societal impact. It’s vital to note the philosophy that comes with this concept: humans are linear, technology is exponential. The premise is simple: technology does not evolve in a linear fashion as we humans do; it snowballs and escalates at an exponential rate, meaning that one day, at the rate we’re going, we will be outpaced by our own inventions and innovations.
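The gap between ‘linear humans’ and ‘exponential technology’ is easy to understate, so a few lines of arithmetic help make it concrete: thirty linear steps of progress yield thirty units, while thirty doublings yield over a billion.

```python
# Thirty "steps" of progress: linear adds one unit per step,
# exponential doubles each step (the framing behind the 6D's).
steps = 30
linear_total = steps              # 1 + 1 + ... thirty times = 30
exponential_total = 2 ** steps    # doubling thirty times

print(linear_total)               # 30
print(exponential_total)          # 1073741824 (over a billion)

# The "deception" phase: early doublings look unimpressive...
for step in (1, 5, 10, 20, 30):
    print(f"step {step:>2}: linear={step:>2}  exponential={2 ** step}")
```

This is also why the early doublings feel deceptive: at step 5 the exponential curve has only reached 32, barely distinguishable from linear progress, yet by step 30 the two differ by eight orders of magnitude.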
The 6D’s of Exponential Growth: An Example from the ThinkBusinessBlog
Experts all over the world predict that humans are fast approaching the technological ‘singularity’, an almost dystopian concept of society being overtaken by artificially intelligent machines and devices far more capable than we will ever be. Within the short time frame of a decade, the singularity has transformed from a promising science fiction movie pitch into a legitimate scientific debate. Suddenly, dystopian novels started to make more and more sense. If you had told anyone, back when the movie came out, that a robot à la Andrew from Bicentennial Man would legally be considered a human being, possessing or even surpassing human intellect, you would probably have been lumped in with the conspiracy theorists lurking in forums in the early days of the internet.
However, the narrative is now entirely different. Theoretically possible but extremely difficult to achieve, human-level AI (one that could pass the Turing Test) could be the catalyst for a transition from the reign of humans to superintelligent machines that might altogether ignore Asimov’s Three Laws of Robotics. For the first time ever, we are putting rights, responsibility, identity and machines together, in the same sentence and in the same context. Exactly when that moment will arrive, we don’t know for sure. (Ray Kurzweil says it could happen as early as 2045.)
This, of course, raises ethical questions ahead of the technological singularity’s arrival. Are machines capable of learning true human empathy? Can they exercise just and fair moral judgement? Can they fully comprehend risk? Are they entitled to the same rights we naturally produced, organic living beings enjoy? At the rate we’re going, the singularity may well arrive before we can come up with definite answers to these questions, unless we prepare for it.
How exactly can we best ready ourselves aside from knowing that this is a distinct possibility? This leads to my next point.
VUCA in Technology and Innovation
VUCA is an acronym coined in 1987 describing four aspects of challenging circumstances and environments: Volatility, Uncertainty, Complexity and Ambiguity. Until 2002, the term was generally used in reference to the perceived effects of the end of the Cold War on the world. The concept has since taken shape in the context of business, organizations, strategic leadership and education.
Unpredictable events happening outside an organization can be negative or positive, but either present greater VUCA, which makes it more difficult for leaders to make decisions. An example of positive complexity is a product going viral and becoming an internet sensation. An example of negative complexity is how the hopelessness of a Tunisian fruit vendor led to Brexit.
- Excerpt from Forbes : How VUCA Is Reshaping The Business Environment, And What It Means For Innovation
This model is very useful as a tool for mapping out the future, as it sets the tone for the type of mentality that we should possess in preparation for the next wave of technology and new industries of tomorrow. The following descriptions are in the context of tech and innovation:
Volatility — The Nature and Dynamics of Change
It’s 2018. While many may disagree, print magazines are a thing of the past, and children of this generation view them as Jurassic artifacts. Yes, there are vast differences between print and digital, from how people consume content to how each medium lets us curate our own experience and the air of ‘legitimacy’ that print carries, and we could talk about it all day, but that’s not the point.
Volatility in a technology context requires us to be proactive in seeking out market disruptions and, rather than trying to lead the narrative, to adapt to it. In an age where instantly accessible information has made people much smarter (or dumber, depending on who you ask), the only way to prosper is to accept that things change more often than they ever have, owing to the exponential growth of technology. With that said, it’s integral that we keep an eye on current trends, emerging trends and even future trends.
To put it simply, we must strike the right balance between being critical of and receptive to what the future brings, be it good or bad. It’s a roller coaster ride out there.
Uncertainty — The Unpredictable Future Looming
With volatility comes a lot of uncertainty. While you and your team may be stuck on the idea of building the next big platform or invention, at any moment, your competition could swoop in, launch one of their own and put your plans to a halt — or worse, a dead end. In this increasingly unstable world, the strategy for navigation is up for debate. Consumer markets are ever changing, the shift of power is rapid and people in executive positions may be toppled if they don’t stay on their toes.
Perhaps this is best exemplified by the demise of mobile phone giant Nokia, whose reign at the top ended abruptly with the arrival of smartphones, ushered in by Apple’s very first iPhone. That wasn’t the whole story: Nokia actually had a fighting chance early in the game before falling off completely, and squandering it was a fatal error in judgement. In my opinion, these factors were the major contributors to their downfall:
- Their stature as the market leader made them too complacent.
- They failed to recognize that there were disruptions on the horizon.
- Because of that, they lost touch with their consumer base, whose expectations of mobile phones had drastically changed by that point.
- They weren’t able to adapt fast enough, and were fixated on establishing their own platform with Microsoft (See: Nokia Lumia and how that turned out for them).
- They ported over to Android way too late, and their brand’s pulling power had significantly diminished by that time.
The iPhone was branded as a product with superior capabilities at a premium price. It disrupted Nokia and effectively killed off its once-iconic offering.
How, then, do we counteract uncertainty? We need to anticipate the future in order to compete in the present. Flexible systems, good planning and ample resources for unexpected outcomes are the keys to thriving in this dynamic, ever-changing environment. Just as important are treading carefully, recognizing which disruptions to take head-on and which to ride along with, and finding solutions to problems that don’t exist yet. Will we adapt, or will we perish?
Complexity — The Confusion that Surrounds Organization
Technology as we know it is growing more intricate by the minute. Even traditional domains such as logistics, finance and education are constantly being disrupted by new models. Same-day, on-demand couriers are now a button press away, loans can be approved by analytics algorithms, and classrooms use virtual and augmented reality to enhance the learning experience. While these services may seem simple enough on the outside, the processes behind them are incredibly complex. In an interconnected world, simple, straightforward solutions no longer work the way they used to; we now have to account for a number of outside factors such as environmental costs, legal compliance and social impact.
On many levels, it’s no longer sufficient to have expertise in just one domain. We must prepare to work in cross-functional environments and understand that beyond the technology itself, other aspects contribute to a company’s success. We simply cannot develop a waterproof mobile phone without understanding the physics involved, the business economics of producing the materials, and the user experience consumers seek. This is the new paradigm that has become more apparent over the years: our success depends on our knowledge of each other’s domains (holistic understanding) and on synergizing to produce the optimal output for any given assignment.
We now need to cultivate an environment that nurtures specialists in sub-domains, to speed up the navigation of complex systems and the building of resources. Furthermore, top-level management must be competent enough to understand and appreciate the interdependence of variables at a high level. This allows them to adopt non-linear approaches to problem-solving: thinking outside the box. It also means that multilayered dilemmas brought on by excessive complexity can be decomposed and solved systematically, much like applying algorithms in programming.
Ambiguity — The Haziness Of Reality
What holds true today may not necessarily apply tomorrow. No one can predict the future, and in tech that means products can’t be launched based solely on expected disruptions. No plan survives disruption by a competitor, so we need the ability to make sense of things in ambiguous macro environments.
“The key to getting ahead of the issues is knowing your organization. Knowing exactly where your people are at any point in time is essential to mobilising in the event of a disaster; and knowing who they are and what they can do, enables the business to take advantage of new economic capabilities. Above all, knowledge of the programme as a whole helps to expect the unexpected, as well as uncover significant cost-savings and transformational opportunities.” — Vicki Marsh
With so much ambiguity, we need to figure out how to properly manage risk. One way is to combine a holistic overview with a deeper understanding of subtopics in order to find solutions that resonate with the community, not just customers in general. The pace of change is certainly challenging, and this approach helps us prepare to fail and to face previously unseen realities in the industry.
Digitalization, Integration and Disruption
“Kodak and Blockbuster weren’t caught by surprise, they knew what the future looked like. They didn’t know later than everybody else, they knew ahead of everybody else. They knew; but they were unable to put together the right response.” — Joshua Gans
It’s clear that not everyone masters every individual step, even within a controlled setup. A carpenter who uses nails doesn’t need to know or care about all the steps taken to create them; the nails serve a single function in the finished product. In many ways, computer-based systems are the same: they are one part of the process a product undergoes. You and I, the consumers, are fine not knowing what happens behind the scenes, just as the carpenter doesn’t know how the nails are manufactured.
On the other hand, the moment the furnace producing the nails goes down, the carpenter eventually can’t work. Today, if a server goes down, an airline can’t fly. The main difference is that flights are a service and cannot be stockpiled like nails, so an outage has immediate consequences. A broken furnace wouldn’t affect the carpenter until his entire supply of nails ran out and the store sold out as well, a pace that might allow him to source nails from a different supplier before they’re completely exhausted. Digitized factories with automated inventory management then come into play, communicating with a database API that various nail distributors could use to publish their availability on an online platform.
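As a rough sketch of that last idea, the reorder decision could look like the following. The supplier names, the low-stock watermark and the shape of the availability feed are all invented for illustration; a real system would pull these values from the distributors’ API.

```python
# Hypothetical sketch: a digitized workshop checking nail stock across
# distributors before the local supply runs out. All names and numbers
# here are invented for illustration.
LOW_WATERMARK = 500   # reorder when local stock falls below this


def pick_supplier(local_stock, feeds):
    """feeds maps supplier name -> units available.

    Returns the supplier with the most stock, or None if no reorder
    is needed (stock above the watermark) or nobody has supply.
    """
    if local_stock >= LOW_WATERMARK:
        return None  # no need to reorder yet
    in_stock = {name: qty for name, qty in feeds.items() if qty > 0}
    return max(in_stock, key=in_stock.get) if in_stock else None


feeds = {"AcmeNails": 0, "IronWorksCo": 12000, "BoltDepot": 3500}
print(pick_supplier(local_stock=120, feeds=feeds))   # low stock: reorder
print(pick_supplier(local_stock=9000, feeds=feeds))  # plenty left: None
```

The point of the sketch is the shift in tempo the text describes: the decision that once waited for the carpenter to notice an empty bin now happens continuously, triggered by inventory data rather than by a person.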
Picture this: just two decades ago, we had to walk up to a counter and present printed tickets to a clerk at the airport, who would register us in a database and manually notify the flight that we had checked in. Today, we walk into an airport and find a machine. All we have to do is scan the QR code on our phones, and within seconds we get a boarding pass, a receipt and a tag for our luggage.
Even more interesting is what happens on the back end within those few seconds. Once you scan your code, you’ve triggered a communication chain made up entirely of machines. Once you’re confirmed as a passenger, computers automatically check your flight’s status, your travel history, your criminal record and so on. They check your seat of choice, your frequent flyer miles and your access level to areas such as VIP lounges and connecting-flight waiting zones. This invisible conversation occurs among multiple servers talking to a mesh network, reaching out to satellites that are also conversing with machines (maybe in Malta, where you’re headed), checking with foreign immigration and so forth. To keep the weight distribution optimal, these machines even adjust seating and passenger counts depending on whether the plane’s fuselage is loaded more toward the front or the back.
Take a step back and consider what digitization has done for us. Menial tasks that once required significant human labor have been rendered obsolete by machines that are a one-time investment, make fewer mistakes and complete assignments in a fraction of the time.
Toys “R” Us, once the world’s biggest and most renowned toy retailer, has filed for bankruptcy, with most of its inventory rendered obsolete by the emergence of what we like to call smartphones. As much as we feel for our favorite toy store, it makes sense. Why repeatedly spend money on single-function toys that lose their novelty within a week or two when you can consume content on a mobile phone for much less, with a one-time investment?
The way we design is now radically different. Yes, toys still exist, but the ones with any chance of selling like hotcakes now revolve around interoperability: gadgets that interact with other toys, teddy bears that connect to the internet to download new voice packs and actions, smartphone-controlled cars with added functionality, and so on. My point is that the digital era has been so influential across industries that they now have no choice but to adapt or lose relevance altogether. Change has gone from gradual to sudden.
Automation, Augmentation, and Autonomy
Thanks to recent innovations, the technology industry has definitely made lives more convenient. We’re able to remain in contact with our loved ones constantly and even reconnect with people we haven’t seen for a long time. We’ve been a part of significant cultural events, both past and present (Think: The Royal Wedding, the Malaysian Elections, the Beatles’ last concert at the Apple Rooftop, etc.) and we’ve had exchanges with people we otherwise would never be able to meet in a world void of connectivity. We get the experience that others get on a digital platform and that forces businesses to adapt.
In an industrial context, even production factories are digitized. Sensors and connecting media allow machines to communicate with each other. For example, if robot A senses a calibration error or a malfunctioning part, it can automatically tell robot B to halt its operations so maintenance can be performed. To some extent, human assistance is still needed at this level for repairs, but that’s not to say it hasn’t made life easier. In the past, when a machine malfunctioned, the entire line would stop while maintenance troubleshooted to find the root of the problem. Now, installed sensors can tell them exactly what needs fixing and which parts of the factory can keep operating without a shutdown. Life is much easier. No, we don’t lose jobs; they just change in nature over time, and we have to adapt as they do.
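The robot A / robot B scenario can be sketched as a tiny peer-notification loop. The `Robot` class, the fault code and the messages here are invented for illustration; real industrial systems would typically use messaging protocols such as OPC UA or MQTT rather than direct method calls.

```python
# Minimal sketch of machine-to-machine fault signalling on a production
# line. Names, fault codes and the direct-call "network" are invented
# for illustration only.
class Robot:
    def __init__(self, name):
        self.name = name
        self.running = True
        self.peers = []

    def link(self, other):
        """Register a downstream machine to notify on faults."""
        self.peers.append(other)

    def report_fault(self, code):
        """Sensor detected a problem: tell every linked peer to stop."""
        print(f"robot {self.name}: fault {code}, notifying peers")
        for peer in self.peers:
            peer.halt(reason=f"robot {self.name} fault {code}")

    def halt(self, reason):
        self.running = False
        print(f"robot {self.name}: halted ({reason})")


robot_a = Robot("A")
robot_b = Robot("B")
robot_a.link(robot_b)

robot_a.report_fault("CAL-17")   # calibration error detected by A
print(robot_b.running)           # False: B stopped for maintenance
```

Note that only the linked peer halts; an unlinked machine elsewhere on the floor would keep running, which is exactly the selective-shutdown benefit the paragraph describes.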
As we begin to explore the fourth industrial revolution, we must shift our focus to addressing three main questions:
- Since we depend on technology, we might someday lose manual skills for certain tasks. What could go wrong? (Think autonomous vehicles and losing the skill for manual driving)
- To whom are we giving the access and authority to embed critical knowledge in software?
- How do we ensure that processes being done behind the scenes are safe and reliable, enough to protect them from hackers, fraud, terrorists or even natural disasters?
Let’s face the facts: we’re becoming increasingly dependent on technology. But alongside drastically improved productivity and quality of life, we need to consider how, where and to whom we’re handing control and governance. There’s much work to be done and much left to discuss when it comes to adopting even decades-old innovations, and the risks need to be brought to light as well. Are we slowly losing control over our lives? That’s something to keep in mind.
The Culture of Open Innovation
Aside from the short video above (which I recommend you watch), a lot can be said about this concept. Perhaps it is best explained through one of the world’s most recognizable brands. Samsung is one of the leading enterprises in the digital device space. From smart home appliances to flagship smartphones with superior features, these devices have made it to market thanks in large part to a culture of open innovation.
This company takes pride in their culture and they constantly collaborate with various entities to achieve the best possible results. Samsung has participated in various global consortiums to assist in achieving general consensus on important aspects of technology, including but not limited to creating beneficial ecosystems, producing standards for manufacturing and creating guidelines for design. This allows them to position themselves to lead the push in developing next generation infrastructures and cutting edge technology.
By creating strategic alliances between the industry and top-level universities in various countries, the company is able to develop a robust network of emerging technologies, next generation infrastructure and world class personnel. They collaborate by promoting research and sponsoring training for students and employees all over the world. This is an example of a big enterprise ‘keeping their eyes peeled’ for the next wave of innovation coming within the next few years. With multiple research centers spread across the continents, they are able to gather unique insights from different markets and consolidate them with their internal R&D team to create new products in different fields — namely hardware, software and packaging.
Furthermore, Samsung frequently works with other companies. Its ventures arm invests in early-stage startups, bringing in revenue if they manage to exit while providing access to new technologies Samsung can learn and benefit from. Examples of companies they’ve invested in are Yellowbrick Data, BitFusion.io and Audioburst. In the same vein, they run accelerators to cultivate an empowering environment for entrepreneurs, some of which include an initial investment, facilities and access to their vast resources. The philosophy behind this is that products from these programs will be integrated into Samsung’s portfolio over time or otherwise provide excellent insights moving forward. As icing on the cake, they actively seek out and acquire startups aligned with their vision and strategic direction; more often than not, these companies remain independent and are given access to various resources.
So why do large enterprise corporations collaborate with smaller ones? There are a few main reasons:
- They spend less on R&D and bring in ideas from outside their own bubble.
- They save time by letting companies work and flourish on their own, as long as the results can benefit them in the future.
- Their products can utilize and integrate designs made by startups that remain independent but are fueled by the resources of bigger companies.
Big companies can tap into the huge pool of innovations smaller groups have produced instead of exhausting their resources replicating them or hunting for the next big thing to revolutionize their industry. Open innovation gives them the freedom and flexibility to grant startups access to their resources and to integrate the results into their own products, producing value on both ends.
Open Source In Hardware, DIY And Its Influence On The World
Not to be confused with open innovation, where the focus is on companies adopting such a culture in their practices, open source instead engages the community around a product in order to improve it and to produce applications even the original creators did not foresee. The open source movement began in the 1980s in the United States, in reaction to the software vendors that had emerged the decade prior. Independent developers created software that wasn’t tied to specific hardware manufacturers and sold it through licenses. The idea soon evolved into the concept of ‘free’ software, which encourages other developers to build on top of a platform whose code is readily available to tweak and modify. Today, we see the effects of this movement in our mobile phones: the Android operating system running on smartphones is Linux-based.
Fast forward a couple of decades, and the influence of open source now reaches the hardware space. Every modern engineer’s handy tool of choice, the Arduino is a commercially available, affordable microcontroller board based on the ATmega series. In layman’s terms, it’s a programmable board you can connect to various electronic components for almost any application you can think of. It’s been used in homes, in factories and in robotics, and it’s now championed as a learning tool in academia.
The Arduino is fully open source, meaning you can replicate, modify, redesign and reinvent the board’s core specifications to your liking. Even a slight change, such as swapping one component for another, requires you to rename the result to something derivative of the Arduino base (Pinguino, Aceduino, Seeduino, etc.). This ushered in a new era of Do-It-Yourself (DIY), with popular sites like Instructables and Hackster becoming prominent platforms for showcasing project designs, often with step-by-step instructions and the original source code so individuals can recreate them from the comfort of their homes.
This new era effectively shook up the technology and innovation landscape. Suddenly, the power was back in people’s hands: almost anyone with the most rudimentary electronics knowledge can reproduce certain functionalities of commercial devices at a fraction of the cost. Companies now have to produce cheaper, more refined products to cater to the market, and even then it doesn’t completely take away the shimmering appeal of creating your own solutions to everyday problems.
While the Arduino’s appeal extends beyond makers into the mainstream market, for startups and inventors its best feature as an open source product is the support available through its worldwide online community. Used as a learning tool, a stepping stone and a prototyping device, it has become an integral part of product design in the hardware space, enabling proof-of-concept projects for validation right before mass production. Its open documentation allowed other manufacturers to design add-on boards known as ‘shields’, stacked on top via the board’s female I/O (input/output) pins, that give it extra functionality: motor shields, sensor shields and connectivity shields such as the XBee, which enables radio-frequency communication, or a WiFi shield, which grants internet access.
The maker culture continues to proliferate, and alongside the rise of single-function microcontrollers came the credit-card-sized Raspberry Pi. For those not familiar with this device, believe it or not, it’s a full-blown computer, often called a microcomputer due to its size, capable of running various operating systems such as Raspbian or even Windows 10 IoT Core. Unlike the Arduino, the Raspberry Pi can handle multiple programmed tasks at a time, just as a normal computer would, and relies on GPIO (General Purpose Input/Output) pins to connect to electronic components. With add-on boards like the Camera Module or the Sense HAT, the Raspberry Pi built a following of its own and added yet another dimension to an already vibrant landscape.
The Internet-of-Things: The Modern Day Industrial Revolution
This brings us to one of my favorite modern applications: the Internet-of-Things, or IoT for short. Over time, various open source boards naturally gained the ability to connect devices to the internet so they could communicate with each other. Previously known as M2M, or machine-to-machine, IoT has taken the world by storm, with applications built by people everywhere: from simple motion-detecting alarms to sensors measuring pH levels in rivers for research. This technology has benefited the most from open source and has played a pivotal role in the recent startup trend.
For those not familiar, there are four distinct characteristics of IoT:
- Ambient Intelligence
- Flexible/Adaptable Structure
- Event-Driven Triggers
- Complex Access Technologies
Current system standards generally possess ambient intelligence: unobtrusive hardware, dynamic distributed networks, seamless and ubiquitous communication lines, adaptive software and sensor technology. Products serve two main purposes: collecting large amounts of data and creating event-based triggers from sensor feedback. We accrue data either to analyze it manually or to aggregate it on an analytics platform, find problem areas, and optimize our actions based on those findings.
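To make the event-based-trigger idea concrete, here is a minimal sketch in Python. The sensor readings, the 25 °C threshold and the alert message are all hypothetical; a real node would poll hardware and push the events somewhere rather than print them.

```python
def check_readings(readings, threshold, on_trigger):
    """Fire the callback for every reading that crosses the threshold."""
    events = []
    for reading in readings:
        if reading > threshold:
            events.append(on_trigger(reading))
    return events

# Simulated sensor feedback; a real node would read from hardware instead.
readings = [21.5, 22.0, 30.2, 21.8, 31.4]
alerts = check_readings(readings, threshold=25.0,
                        on_trigger=lambda r: f"ALERT: {r} C over limit")
print(alerts)  # two readings exceed the 25.0 C threshold
```

The same shape, a stream of readings feeding a threshold-or-rule check that fires an action, underlies everything from motion alarms to river pH monitors.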
“The philosophy of IoT is to create a world in which millions of devices are able to communicate with each other in the best and widest possible way, without technical or commercial limitations, in order to make our lives a little better.”
There are various obstacles to development, but perhaps none more important than the interoperability of sensors, software and connectivity mediums. With so much potential across so many areas, the field’s own diversity could hinder its growth. There are already hundreds of thousands of devices of different types, manufactured by different brands, each with its own standards. Enabling these devices to communicate with each other is therefore not only a major technical roadblock, but also a matter of gaining consensus.
Interoperability makes or breaks the design of connected systems. If two IoT devices can’t communicate, typically because they use different protocols, we need two separate controlling devices with two separate chunks of code to talk to them. That doesn’t bode well for scalability. Worse, devices on different controlling mechanisms cannot coordinate with each other. Imagine being in an autonomous vehicle when it begins to malfunction; because it can’t communicate with other vehicles on the road, our lives are put at risk. A good example of a semi-interoperable device is the Amazon Echo. It’s good, but not nearly enough: the Echo integrates with various smart device brands, such as lights and adapters, but leaves out other manufacturers. This is the pressing challenge moving forward. We must develop systems, or parts of systems, that communicate seamlessly through integrated middleware regardless of manufacturer or technical specification. This puzzle must be solved to allow for expansion and mass adoption within the next few years.
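One common middleware pattern for this problem is the adapter: translate every vendor’s message format into one shared schema so the rest of the system only ever sees that schema. A minimal sketch, assuming two entirely hypothetical vendors that report the same temperature reading in different formats:

```python
def from_vendor_a(msg):
    # Vendor A reports Celsius: {"temp_c": 23.5, "dev": "a-01"}
    return {"device_id": msg["dev"], "celsius": msg["temp_c"]}

def from_vendor_b(msg):
    # Vendor B reports Fahrenheit: {"fahrenheit": 74.3, "id": "b-17"}
    return {"device_id": msg["id"],
            "celsius": round((msg["fahrenheit"] - 32) * 5 / 9, 1)}

ADAPTERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def normalize(vendor, msg):
    """Translate any vendor's message into the one shared schema."""
    return ADAPTERS[vendor](msg)

print(normalize("vendor_a", {"temp_c": 23.5, "dev": "a-01"}))
print(normalize("vendor_b", {"fahrenheit": 74.3, "id": "b-17"}))
```

Adding a new brand then means writing one new adapter function, not rewriting every controller that consumes the data, which is exactly the scalability property the paragraph above is after.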
Technology has helped businesses grow and operate on local, regional and global scales in many different forms. Digital workplace productivity tools like Slack, Trello, IFTTT and many more have surfaced over the past couple of years, enabling teams to coordinate with each other no matter where they are in the world, as long as they have internet access.
The dynamics of business have fundamentally changed because of the Cloud. Gone are the days of costly server room setups for enterprise corporations. Startups can now afford the same capacity at a much, much lower price, on more secure and credible hosts such as Google Cloud, Amazon Web Services (AWS) and Microsoft Azure, to name a few.
Utilizing the cloud makes collaboration a simple task. It’s as easy as accessing the drive linked to the cloud server and dragging and dropping files on the go, from Japan to Russia, Berlin to Singapore, the Maldives to Nigeria, and pretty much anywhere else with internet. This has opened up a wide range of possibilities. If, say, you wanted to run a freelance development project, you could hire a project manager from Italy, QA engineers from India, and front- and back-end developers from Taiwan. The entire process takes only as long as uploading and downloading the needed files, and you can collaborate on the go. Features like comments and access restrictions (view-only or editable) make it much easier to manage a project without ever being physically present in one room. Now I know VoIP isn’t new, but it’s worth mentioning anyway: coordination via tools like Skype, Zoom or Office 365 is a piece of cake with enough bandwidth to support your videoconferencing needs.
Nowadays, with the rise of the SaaS (Software-as-a-Service) model to its current prominence, it’s no longer sensible to repackage software into a new product when you could just as easily deliver updates and content over the air (OTA). The same is happening in hardware. Devices like the Amazon Echo or the Google Home update themselves automatically whenever updates are available, and other IoT devices have since followed suit. Of course, there are considerations when upgrading, such as whether the hardware is outdated or obsolete, or whether newer, more efficient designs completely outclass their predecessors. Apple has struggled with this in recent years, with customers complaining that it keeps churning out the same phone with only incremental spec upgrades that no longer justify the exorbitant prices.
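The core decision an OTA updater makes is simple: compare the installed version against the latest one advertised by the update feed. A minimal sketch, with hypothetical version strings, showing why the comparison must be numeric per component rather than a plain string compare (as a string, "1.9.3" would wrongly sort after "1.10.0"):

```python
def needs_update(installed, latest):
    """Compare dotted version strings component by component, numerically."""
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return to_tuple(latest) > to_tuple(installed)

print(needs_update("1.9.3", "1.10.0"))  # True: 10 > 9 numerically
print(needs_update("2.0.0", "2.0.0"))   # False: already current
```

Real updaters layer signing, staged rollouts and rollback on top of this check, but the version gate itself looks much like the above.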
“There’s an app for that.”
A promotional advertisement for the iPhone 3G in 2009.
As a tech company, your biggest goal is for your brand, product or service to be used as a gold standard of comparison for just about anything. We almost always hear the phrase “Uber for _______” and while I don’t agree that people should be patterning their model after this, you can’t deny that it makes it easy for everyone else to comprehend.
The essential features of modern mobile phones now cater to almost any functionality you can think of. They can function as flashlights, as notebooks, as voice recorders, as music stations, as gaming devices, as televisions, as cameras and more. Being in a country where 90% of the population doesn’t speak English isn’t as scary as it was maybe 5–10 years ago, because yes, you guessed it right, there’s an app for that.
The challenge for both startups and traditional businesses now is for them to be able to grow exponentially fast or risk losing out to younger, newer, and hungrier teams ready to take the throne by force at any time. Creative business models that essentially offer a version of your product that’s much cheaper (or even free) and more efficient could rock your world and you’d be in a lot of trouble if you can’t respond swiftly. How then do these companies adapt by using technology?
There’s one organization I work with whose approach I’m particularly interested in. Acudeen Technologies is an example of a company with a very good strategy for scaling fast and capitalizing on initial success. From a team of three, they’ve grown to over 50 employees across three countries in just under two years. Founded in the Philippines and now based in Singapore, Acudeen is a platform where SMEs upload their invoices or receivables to a marketplace and auction them off at a discount to interested parties in exchange for faster liquidity (3–7 days instead of 60–90).
With problems surfacing such as fraudulent invoices being uploaded, a cumbersome disbursement process and a lack of transparency between funders and sellers, it would have been difficult to scale operations to other countries in the region without first addressing them in a local context. The mainstream adoption of blockchain technology has made that much, much easier. Current and past transactions for each user are made transparent through distributed ledgers, while the IBM Hyperledger Fabric, on which Acudeen’s AssetChain is built, protects sensitive information on a private blockchain that can only be accessed with a special key, which users themselves can provide to potential funders to review and validate before proceeding.
Doing so allows Acudeen to facilitate cross-border transactions, such as a receivable in Jakarta being purchased by someone in Tokyo, and vice versa, using ACU tokens. Blockchain also makes it easy to spot suspicious activity within the platform and penalize those responsible, while users who display integrity, loyalty to the platform and consistency are incentivized. Expansion then begins to make sense, as it’s now faster, cheaper and significantly easier to facilitate transactions with tokenized assets across the world. In this case, technology has removed barriers predetermined by our geographical locations and restrictions. More information can be found in this writeup.
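The transparency property that makes all of this work comes from hash chaining: each ledger entry includes the hash of the previous one, so editing any past transaction breaks every link after it. Here is a toy illustration in Python, not a reproduction of Hyperledger Fabric, with made-up transaction strings:

```python
import hashlib
import json

def make_block(transaction, prev_hash):
    """Bundle a transaction with the previous block's hash and hash the pair."""
    payload = json.dumps({"tx": transaction, "prev": prev_hash}, sort_keys=True)
    return {"tx": transaction, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash; any edited block invalidates the chain."""
    for prev, block in zip(chain, chain[1:]):
        if block["prev"] != prev["hash"]:
            return False
        payload = json.dumps({"tx": block["tx"], "prev": block["prev"]},
                             sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Jakarta invoice sold to Tokyo funder", chain[-1]["hash"]))
chain.append(make_block("payout settled in tokens", chain[-1]["hash"]))

print(verify(chain))       # True: untouched chain checks out
chain[1]["tx"] = "tampered"
print(verify(chain))       # False: the edit no longer matches its stored hash
```

A real permissioned blockchain adds consensus among multiple nodes and key-based access control on top, but the tamper evidence itself is just this chaining of hashes.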
The Convergence Of Technologies — Understanding How It All Comes Together
So you’ve probably heard it a million times by now: at some point, all of these new trends and innovations will converge, producing a massive upgrade and change in lifestyle as Smart Cities emerge, one country after another.
How feasible is it really to believe we’d live long enough to experience a futuristic environment akin to Disneyland’s Tomorrowland? We can expect at least some of its aspects to become a reality fairly soon; some are being implemented as we speak.
At some point, all these new trends you keep hearing about (some of which you’ve probably tried first hand): AR/VR, Cloud Computing, AI/Machine Learning/Deep Learning, Robotics, IoT, Blockchain, Brain-Computer Interfaces and pretty much everything else, will become a fundamental part of society, just as industrial machines, computers and the internet completely changed our paradigm years ago. Will it signal the arrival of the technological Singularity? That might be up for debate. What we’re sure of is that once the stars align and the time is right, we’ll be living in a world that relies heavily on these fields, and the giants like Google, Alibaba, Amazon and Facebook will once again be forced to adapt or perish.
There are varying opinions on certain topics, like how we should prepare to build the future and which technologies will thrive. Some are on the fence about letting them take over our day-to-day tasks; others are forward-looking and more receptive to the change. No matter which side you’re on, the first thing you want to do is identify the technologies you believe your industry will revolve around. If you were an engineer, you’d definitely want to look at the combination of AI, IoT and Robotics. If you were in the fashion industry, you might use AI to make brand recommendations based on user data, and Augmented Reality to show consumers what they’d look like in a certain article of clothing before they decide to purchase it. The possibilities are quite frankly endless.
My own vision of the future involves three fundamental technologies: IoT, AI and Blockchain. These three, I believe, are the key to creating smarter, interconnected cities. Allow me to elaborate on a scenario in which they would effectively come together and how this framework would be utilized for many years to come.
Internet-of-Things based smart devices would have solved the issues of interoperability and connectivity. They are now able to communicate with each other regardless of specification or manufacturer through some medium, depending on circumstances. These devices effectively become ‘nodes’: remote devices that collect data in real time and, if needed, act on certain situations through pre-programmed, event-based triggers. These nodes are used throughout the city for many different purposes.
Artificial Intelligence-based algorithms, whether trained via supervised or unsupervised learning, then come into play. As massive amounts of data are aggregated, AI does its magic. It performs advanced analytics on chosen metrics and creates accurate recommendations by pairing historical data with new data arriving at set frequencies, effectively adding a brand new layer to the decision-making process. This could affect all levels of society, from households to large corporations and even governments. One example of large data sets already being used for our benefit, although soon to be a primitive one, is GPS navigation, which can estimate when we’ll arrive, tell us which routes to take and so on. Imagine an application like that on a larger scale, where phones and cars are not the only mediums providing data.
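A deliberately simple stand-in for that analytics layer: compare new values against a baseline built from historical data and flag sharp deviations. Real systems would use trained models; the traffic numbers and the 50% tolerance here are invented purely to show the shape of historical-plus-incoming-data decision making.

```python
def flag_anomalies(history, new_values, tolerance=0.5):
    """Flag incoming values deviating from the historical mean by more
    than `tolerance` as a fraction of that mean."""
    baseline = sum(history) / len(history)
    return [v for v in new_values if abs(v - baseline) / baseline > tolerance]

traffic_history = [100, 110, 95, 105, 90]   # e.g. vehicles per minute
incoming = [102, 240, 98]                   # 240 is a sudden spike
print(flag_anomalies(traffic_history, incoming))  # [240]
```

In a smart-city setting, a flagged spike like that might feed straight into the event-based triggers the IoT nodes expose, for example by rerouting traffic signals.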
Now that we have an efficient medium for collecting data (IoT devices) and an AI-based platform that gives us deeper insight into various problems and recommends courses of action, the next logical step is to ensure those actions are carried out in a secure, legitimate manner. Enter blockchain technology. Data collected by IoT devices can be securely logged on permissioned blockchains, accessible only with a private key provided to the right people.
Imagine a city in which machines are self-sustaining and revenue generating. They can transact with humans, buying and selling assets and services, by automating transactions through what we call Smart Contracts, another interesting aspect of blockchain technology. Basically, if my electric car runs out of battery in the middle of the road and a charging station is available, I can scan my blockchain-based national ID and the cost is charged directly to my (you guessed it) cryptocurrency wallet. A smart contract automatically checks whether I have sufficient balance; if not, the shortfall can be credited as a loan, paid off the next time any form of compensation arrives in my wallet.
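The charging-station contract above can be sketched in plain Python. Everything here (the wallet structure, the prices, the loan rule) is hypothetical, and a real smart contract would execute on a blockchain platform rather than in a script; this only shows the balance-or-loan logic:

```python
def charge_vehicle(wallet, price):
    """Debit the wallet; if the balance falls short, record the rest as a loan."""
    paid = min(wallet["balance"], price)
    wallet["balance"] -= paid
    shortfall = price - paid
    if shortfall:
        wallet["loans"] += shortfall
    return {"paid": paid, "loan": shortfall}

def receive_payment(wallet, amount):
    """Incoming funds repay outstanding loans before topping up the balance."""
    repay = min(wallet["loans"], amount)
    wallet["loans"] -= repay
    wallet["balance"] += amount - repay

wallet = {"balance": 30, "loans": 0}
print(charge_vehicle(wallet, 50))   # {'paid': 30, 'loan': 20}
receive_payment(wallet, 100)        # 20 repays the loan, 80 tops up
print(wallet)                       # {'balance': 80, 'loans': 0}
```

The appeal of encoding this on-chain is that neither the driver nor the station operator has to trust the other to honor the loan rule: the contract enforces it automatically.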
Thus, the city now has an easy way of collecting data, making efficient decisions, expediting processes and keeping tabs on its citizens. There might not even be a need for an actual human government anymore! In the next couple of years, we will see our dependence on technology skyrocket — for good or for bad.
What Can We Do To Prepare?
We could go on discussing what the future will look like in terms of technology and how it will impact our daily lives, but I won’t go beyond what we can foresee with currently available resources. This picture of the future of course raises ethical questions as machines and technology gradually take over. Are we actually going to succumb to our artificial counterparts? Maybe, maybe not. That’s an important factor to consider in the not-so-distant future. There are three perspectives from which we can consider how to react to technology and prevent a Skynet outbreak, focusing instead on developing society for the betterment of mankind.
As a Citizen
- We should gauge how advanced our environment is compared to the rest of the world. Are we the forerunners? Are we 10–20 years behind in adopting the latest methodologies and products?
- If we’re the most advanced in terms of technology, how are our governments reacting to these changes? What regulations and standards are in place to protect the interests of the general public?
- If we’re lagging behind, what can we learn from the countries who are way ahead of us? What went wrong, how did they resolve these problems and what should we take note of so we don’t make the same mistakes?
As a Company
- Are newer, more complex technologies actually going to see the light of day in our market’s context? Or are there more pressing needs that need to be resolved with much simpler solutions instead?
- How much bureaucracy is present in our industry? How does this affect how fast we can progress towards our goals and eventually scale up? Are they helpful or counterproductive?
- Will my current business model be disrupted should these new technologies be widely adopted within the next few years? Will my product/service be rendered obsolete if so?
- Is there a way for me to future proof my business around these new technologies while still fundamentally maintaining the essence of the company?
As a Developer
- If I have domain expertise in a certain programming platform or protocol, are there any looming competitors, languages I should learn or alternatives I should be aware of? Do these pose a threat to the existence of the platform I’m currently well-versed in?
- Is the knowledge that I gained 2 years ago still relevant and applicable today, or is it outdated? Am I diversifying myself enough not to stick to just one methodology, but many different ones?
- Are the products I’m developing able to withstand the test of time? Am I utilizing development tools that have strong communities behind them and widespread adoption in terms of customers who get ahold of the final product?
If you made it this far, congrats. That was quite a long read. We’ve touched on a number of concepts, tools, theories, topics and personal insights, all of which should help you start navigating your way through the future. Whatever your industry may be, it’s imperative that you keep yourself up to date on the news and the progress of technology. I’m not claiming that these things will happen, but one day doctors may be rendered obsolete by quantum computing and robots, lawyers may be replaced by AI, and so on. By the time this article turns a year old, some of the things I mentioned above may not even apply anymore.
I highly recommend that you read the book 1984 by George Orwell. Nearly 70 years later, we are slowly seeing its fictional plot points come to fruition, and its dystopian vision might soon become a reality if we don’t learn the importance of being knowledgeable in different areas, even those outside our niche. Diversifying gives us a chance to prevent things like these from becoming our new reality. Not convinced? If you think I’m some kind of conspiracy nut, then I’ll be glad to tell you that some of this is actually happening right now.
There are many other factors we have to shed light on moving forward, and innovation is only one of them from a macro perspective. That being said, the only surefire way to adapt is to accept that nothing is permanent, the world is dynamic, and we must constantly embrace change in order to progress as a society. As Doc Brown likes to say,