A little lockdown distraction

A few years ago I wrote a piece about smartwatches. It was around the time of the launch of the Apple Watch, which was perceived as a flop. Last week I found myself in a clinical workplace that has been split into multiple zones, splitting teams up physically and increasing complexity. My need for information and communication has increased, whilst PPE, gloves and facemasks have made safely accessing and biometrically unlocking my mobile devices almost impossible. Incidentally, they also make interacting with a computer quite difficult.

This has made me think a bit more about the watch: how it has slowly crept into my life, and how it behaves as a companion product, a distraction-sorting device.

In the NHS we know that distractions cause errors and that errors kill people. Simples. In the ED we say we take distraction very seriously. One example is that drug checking is done in a side room, away from the hustle and bustle. Yet many of the constant distractions that pepper our day sit in a ‘too hard to tackle’ pile, because no matter what a distraction is about, we generally have only three ways of bringing anything to someone’s attention:

  1. Interrupt someone in person.
  2. Phone someone.
  3. Page someone.

Any of these interruptions could just as easily be a life-threatening emergency, a coffee with my name on it, or anything in between. As a result, all of them distract me twice: the initial distraction, followed by trying to figure out the degree of engagement needed, a mental triage and decision.

In my other professional life, my watch has become my number one distraction filter. Messages from companion apps whose notifications I simply want to know exist, so I can go to them in due course, come to my watch as a haptic tap. No engaging with, or even reading, the message. Technology performs the triage, minimising the first distraction to virtually nothing, and cohorts the second part, the engagement, into a backlog that can be dealt with more efficiently.

 

Those triaged as needing more immediate engagement go to my phone and demand attention. Like this, I only engage with the phone (find, look at, unlock, digest, decide) when I actually need to. A phone call really rings the alarm bells: that is urgent.
We are at a time when we are meeting unprecedented challenges in the ways in which we work, how we communicate, and even the ease with which we can interact with technology. Whilst I’m not sure ED clinicians can get to the Deep Work nirvana of Cal Newport, I do think we can use technology to stream our notifications into recognisable bundles of work we can complete, in the spirit of David Allen’s Getting Things Done.
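To make that triage concrete, here is a minimal sketch of the kind of routing rule I mean. It is purely illustrative: the categories, fields and thresholds are made up, and it is not tied to any real watch or hospital system.

```python
from enum import Enum

# Route each notification to the lightest-touch channel that fits its urgency.
# Categories and rules are illustrative only.

class Channel(Enum):
    WATCH_TAP = "haptic tap: know it exists, deal with it later"
    PHONE_ALERT = "phone notification: engage soon"
    PHONE_CALL = "phone call: engage now"

def route(notification: dict) -> Channel:
    if notification.get("life_threatening"):
        return Channel.PHONE_CALL
    if notification.get("needs_decision_within_minutes"):
        return Channel.PHONE_ALERT
    return Channel.WATCH_TAP  # everything else joins the backlog

print(route({"summary": "ECG ready to review"}))
print(route({"summary": "abnormal potassium", "needs_decision_within_minutes": True}))
```

The point isn’t the code; it’s that the triage decision is made once, by the system, instead of twice by me for every buzz.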

 

I would prefer wrist taps to tell me I have ECGs or patients to sign off or opinions to review, rather than a constant stream of interruptions. We use companion apps to help us perform focused tasks in our day-to-day lives, so why not in medicine? We have to understand which bits of our lives devices can best help us with, and create apps for them. Just as digital desktop processes shouldn’t emulate paper processes, digital devices shouldn’t emulate digital desktops.

 

Let’s innovate.

20 Years of tech in health

Party like it’s 1999

 

Who remembers New Year’s Eve 1999?

 

For many it was a huge night of partying and fireworks, for others a period of reflection, and for a few it was a time of uncertainty. We weren’t sure what the new millennium would bring and there were plenty of crazy stories circulating that the world was going to end. Even if the apocalypse didn’t happen at midnight, the Millennium Bug might strike, and we wouldn’t be able to turn on a computer.

 

As a junior doctor working in A&E that night, I was mighty scared. I was an SHO 1 (an FY2 in current terminology). Back then, going on a night shift meant packing my backpack full of textbooks: the Oxford Handbooks of medicine, surgery and surgical specialties, as well as the ABC of Radiology and the ABC of Dermatology, were essentials.

 

It’s difficult to explain the circumstances and context that made me need all those books as backup! Reflecting now, a mere 20 years later, it’s startling. I think I can best explain it as a combination of three factors:

 

  1. Information poverty
  2. Generalism
  3. Communication poverty

 

 

Information Poverty:

 

At the turn of the millennium, not only was Google in its infancy, but the thought of using the internet to access reliable information that you might use to treat an actual human being in real time was bonkers. In fact, in 1999 my mental model of accessing the internet involved listening to 2 minutes of screeching noises before achieving anything, and even then constantly worrying about being disconnected.

 

My hospital didn’t have an intranet brimming with up-to-date content either. Any trust pathway or treatment regime had to be designed, printed, photocopied, laminated, and socialised (pinned to a wall or shoved in a folder). Lord forbid a pathway should change: the whole process restarted. In reality, this meant that most pathways fell at at least one of those hurdles and took at least a year to complete.

 

When it came to getting expert advice, we had the pager system. My consultant left at 5pm, and my registrar at 10. After that, there was no one to ask quickly or in passing except the other two junior doctors on with me, and the frankly amazing nurses who kept me right with their wealth of experience and abundance of common sense. Failing that, I would page someone. Specialists didn’t even carry DECT phones.

 

That’s why I took the books. But still: imagine replacing a consultant, a registrar, an intranet, Google, a mobile, WhatsApp and the rest with a pager and some textbooks on a night shift in a tertiary ED.

 

Thinking back, on New Year’s Eve 1999 I pretty much had to rely on my medical school education to decide what to do with patients. No wonder I was worried. I should probably have been more worried. My patients even more so. Reflecting now, there was definitely a bit of it that made me feel empowered as well as scared. I guess that was part of the thrill, part of why I loved and still love A&E.

 

Since then we have grown accustomed to a wealth of resources and a multitude of pathways on the web driving care, and that is good. It now feels as though it is time to move on and incorporate these into clinical decision support pathways that are captured in and driven by an electronic health record. GPs have this down pat.

 

We know that AI is not inferior to humans at a multitude of things: simple chest and ankle x-ray interpretation, identifying suspicious breast lumps on mammography, classifying macular degeneration. Why, in today’s world, would a junior doctor be left alone to interpret a chest or ankle x-ray that might not be reviewed for a couple of weeks?

 

 

Generalism:

 

As well as the relative lack of access to information, there was the incredible number of things I had to do: all cannulas, all bloods, all ECGs, making up and administering many IV drugs and all first-dose antibiotics, and filling in just about every type of form known to man to request x-rays, bloods and the rest. The better I became at some of these (I remember the night I realised I could cannulate in the dark), the less well I did at others: looking at my medical records from then would probably make me wince now.

 

I know that some of that hasn’t really gone away, but at least now ordering tests happens online, comes with some clinical decision support, and doesn’t require me to wander round to x-ray for an opinion. I still do, of course, for the occasional case, but mostly it happens online, with a phone call if necessary.

 

But most staggeringly, somewhere in the last 20 years, just as access to information has blown up, so has the sheer number of specialities and investigations available.

 

On New Year’s Eve 1999, a patient would have to be at death’s door to get a CT. Seriously. I remember thinking “I hope I never need a CT head; getting a CT head is like getting a side room on a ward: check-out time”.

 

In 1999 I had next to no chance of successfully convincing a radiology consultant to come into hospital to scan anyone for anything at night. I would call my reg at home, who would call my consultant, then get back to me and ask me to call the radiology consultant, who would ask to speak to my consultant, and after an hour they would agree that it would be best done in the morning. It was like a really bad soap opera: repeatedly and predictably over-emotional, with the same ending after every episode. In my year as an ED junior, I maybe requested a dozen CT heads. The ED consultant working this New Year’s Eve will be in the hospital and will have requested that many CTs during their shift.

 

The sheer number of specialisms and tests available now is stunning: back in 1999 a single radiologist would report a trauma CT scan (which, in practice, never got done). In 2019, it’s three radiologists: radiology for the chest and abdomen, neuroradiology for the brain, and musculoskeletal radiology for the bones and spine. We can also call interventional radiology if we have a blood vessel issue we think they can sort.

 

And that pattern of hyper-specialisation, and the need for the hyper-specialised brain to be available, exists everywhere:

 

In 1999, if you came to A&E after midnight you saw a junior doctor and a nurse, period. Now there are minor injury nurses, junior doctors, clinical assistants, registrars, advanced nurse practitioners, advanced practitioners, physician assistants and consultants. That’s just the internal ED team. Remote teams like stroke, cath lab, interventional radiology, respiratory and GI are not only more available but more present, often joined at the hip with the ED. For example, in my hospital the stroke team see pre-alerted patients on arrival in our ED and teleconference with their on-call consultant, who examines the patient from home over a video link before consenting them for, and prescribing, thrombolysis.

 

So in the last 20 years the ED has gone from three generalists in white coats and five nurses in blue to many different grades, competencies and specialities of folk in a cacophony of jumpsuit colours: there isn’t a shade of blue, red or green that isn’t catered for. The junior doctors in Leeds ED wear teal. Teal!!

 

Communication:

If you ask what has changed the most in communication over the last 20 years, for me it is the mobile phone in 1999 and the iPhone in 2007. When I watched Steve Jobs announce the first iPhone I realised that it would change the world, and that medicine couldn’t remain immune. It has shaped my psyche ever since and continues to do so. Apart from AI and AR, I have not seen anything so seismic and profound.

 

In 1999 mobiles were not allowed in hospitals because… well, I don’t actually remember why. I’m struggling to remember or explain. Wi-Fi wasn’t around then and everything was wired, so it can’t have been interference, but there was definitely some way in which mobiles would almost certainly cause hospitals to blow up. I remember that very clearly, and the big scary posters everywhere. If anyone remembers why, please remind me.

 

Even after we realised mobiles didn’t blow things up, we nevertheless found ourselves in the groove of saying no to smartphones in general, by reflex, for just about any reason anyone could think of, no matter how unhinged: poor battery life, infection control, wireless reception, staff looking distracted, staff not being able to use them, old people not approving of them, and so on. Then, one day, and I’m pretty sure it really was one single day, that coin flipped, and the very same people who had been saying no for the preceding decade were running around screaming about why mobiles weren’t entirely ingrained in workflow and solving every problem known to man. These will be the very same people now faxing each other about why we shouldn’t wear watches.

 

Before I continue, though, I have another story to tell about the rate of change. The aim is to demonstrate just how strange that rate is, and how important it is for us to learn to embrace it and build a bridge to it, rather than wait for another coin to flip in a few years’ time and wonder how we missed the boat.

 

I asked two more experienced (read: old) colleagues of mine how technology had changed for them in the 20 years between 1980 and 2000.

 

One said the only significant change for him was getting his first mobile in 1998, but he couldn’t get reception and it was too expensive to use. He told me that until then he had to carry a load of 10p pieces in his car when doing visits, in case the pager went off and he had to find a call box!

 

Another elder said that whilst he thought he had encountered online results, again circa 1998, the biggest thing for him in the preceding 20 years was word processing in the mid-nineties. That was the single biggest change in two decades: being able to write and print a letter without a secretary. Even emails were post-millennium for him at work.

 

The rate of change between the millennium and now has been absolutely explosive compared to the preceding decades, and as much as technology changes, our ability to accept and build bridges to the future has to become second nature.

 

It is with great sadness, then, that I note the official way I communicate between specialities has really not changed that much since. We still have pagers. We still have faxes. I still look at a whiteboard or have to phone switchboard to ask for the page number of the cardiologist on call. Sometimes there is voice recognition at switchboard, so bad it reminds me of trying to book cinema tickets by voice in the mid-90s. I spend most of my day wrestling with outdated processes, and an inconsequential amount of time providing either care or the human element of my job that I love so much.

 

Some of my colleagues think that is normal and busy themselves with reasons why we couldn’t possibly do anything else. Sadder still, a significant portion think that email or DECT phones are the solution to faxes, and I still hear doctors bragging about how many pagers they have to carry, or how important they are by proxy of how many unread emails they have every day. The idea of bringing your own device, communicating in workflow, or the potential of the constant partial information awareness we use to manage our private day-to-day lives simply does not exist in much of the workflow I experience.

 

As I get older, one of my greatest regrets is that I never managed to achieve more in trying to get my profession, as one of the oldest, most personal, most human and most important industries in the world, to embrace these changes more quickly. In many ways it has taken our patients to point out to us how the taxi business, the financial sector, even retail, show just how transformative transacting private, trusted, important information quickly and conveniently can be.

 

 

Future thinking

 

Let me be clear: AI is coming. The question we have to answer is how: whether we choose to be leaders in it, or decry and bemoan it until other industries normalise it or the sheer volume of clinical need becomes so overwhelming that it is thrust upon us. At the moment, the regulatory framework for true AI makes doing anything really quite difficult. By definition, it is difficult to explain how a true AI has arrived at its decision, which makes software-as-a-medical-device and other types of regulation a little challenging.

 

But looking at the pace of progress, and the trajectory the world has taken in just the last two decades, I would argue that we really need it. Mostly for the simple stuff. Just as Apple uses differential privacy to provide anonymised intelligence, and big tech is using explainable AI to help understand what made a model suggest what it did, the solution to the blockers to technology lies at the intersection of clinical design, technology and humanism. Not Luddism. We need to build the bridge to the future, even if we then burn it once we are on the other side.
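To give a flavour of the first of those ideas, the simplest differential privacy mechanism just adds calibrated noise to an aggregate before it is shared. This is only a minimal sketch of that idea; the epsilon value and the count below are invented for illustration.

```python
import numpy as np

def noisy_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise calibrated to a sensitivity of 1."""
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# e.g. "how many patients triggered a sepsis alert today", shared as an
# aggregate without exposing whether any individual is in the data
print(noisy_count(true_count=42, epsilon=0.5))
```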

 

In 1999 I was using books with text and the odd flowchart. Now I use flowcharts provided by various online sources, and simple risk scores that my simple brain can handle, to decide what I should do. So we already have a really clear concept of simple clinical decision support, and most of us use local algorithms or validated scores daily. Digitalising these so we can capture and audit the decision process is a really simple, safe step forward. Automating algorithms to look at the clinical record is another step, and adding ever more complex (but still simple) Bayesian and Boolean models yet another. Even then, we are nowhere near AI, but crucially we are better, safer and closer.
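As a sketch of what “digitalising a simple score so the decision process is captured” might look like, here is a toy rule-based score that records which criteria fired and when. The criteria and weights are invented purely for illustration; this is not a validated clinical tool.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative criteria and weights only: not a real, validated score.
ILLUSTRATIVE_CRITERIA = {
    "heart_rate_over_100": 1,
    "age_over_65": 1,
    "abnormal_chest_xray": 2,
}

@dataclass
class ScoreResult:
    total: int
    contributing: list   # which criteria fired, kept for audit and review
    computed_at: str

def compute_score(findings: dict) -> ScoreResult:
    contributing = [name for name, present in findings.items()
                    if present and name in ILLUSTRATIVE_CRITERIA]
    total = sum(ILLUSTRATIVE_CRITERIA[name] for name in contributing)
    return ScoreResult(total, contributing,
                       datetime.now(timezone.utc).isoformat())

# Stored against the record, every result says what the score was, why, and
# when, so the decision process can be followed and audited later.
print(compute_score({"heart_rate_over_100": True, "abnormal_chest_xray": True}))
```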

 

Just as the last 20 years show how much we have already gained from the benefits of mobility and on-demand data, we can continue to create more and more systems and processes that lower the bar, allowing more people to participate in better clinical decision making across ever more focused, streamlined workflows.

 

The idea of a (moderately) highly trained junior doctor dealing with everything in the ED on the back of their medical education will become ever more incongruous. Where once every aspect of a person’s treatment was down to a few people, even the minor improvements to the clinical decision support we have today are empowering, and will increasingly empower, ever more focused and varied roles to be involved in providing better care, and even patients to better triage and treat themselves. This is all already happening. It’s just up to us to accelerate it.

 

Now, if for example a stroke patient comes in, thanks to good digital decision support a nurse is on hand with specialist knowledge and skills to assess, investigate and treat them. Radiologists perform the scan and liaise with doctors in Australia to report it. A stroke consultant can look at that scan at home and come in at the very end of that process if necessary. Compared to being met by me on New Year’s Eve 1999, with no access to a scan, or even by me today with access to a scan, I know which one I would want for me or mine each and every time.

 

True AI, where the AI decides what it is going to do, would have a place in the world of big data. Just below that would be the high-level clinical decision support and warnings that would tell me a specific patient is at risk of X, or that this antibiotic might be better for that patient. The scores that I use now might change in real time depending on the results of the latest validated paper or expert decision group.

 

I remember many a senior telling me sternly that patients don’t want their doctors looking at the internet to help them. Now that is mostly what clinicians do: maybe not Google, but something, anything, that can get them and their patients the best information they need the quickest. As guardians of medicine, we need to be steering towards a better, safer future, even if the solutions we provide today aren’t quite there yet.

 

I’m off to party like it’s 2019. I think it’s a safer, better place. If you have any memories of technology you have had to, or still have to, experience, or thoughts on what the future will mean for you, feel free to share.

Process debt

  1. Agile focuses on product, to build fast and decrease the cost of change.
  2. Agile management principles can be transferred to corporate management.
  3. Corporates are facing an existential threat from technology and must learn to execute in tech to compete.
  4. Agile corporate structures allow you to focus on outcomes, design late, respond fast, and decrease the cost of change.
  5. This is increasingly indispensable to executing efficiently, competing fast and avoiding disruption.

The idea that agile management in IT is relevant to the corporate world has been around for a long time. Some of us nerds who dabble in both realms have expressed the founding principles of agile in more corporate speak, but essentially the idea of getting teams that encapsulate the expertise required to create solutions for customers quickly is a no-brainer. After all, that’s broadly how companies like to organise themselves.

In agile, our plans carry technical debt. Whenever we choose to code at pace we intentionally trade off either features, quality or compatibility against perceived future need, against cost, against capacity and against time.

However we also often inadvertently trade off future usability by just not thinking clearly about, well, the future.

This is technical debt. Most agile managers will log technical debt as best they can, so they can plan out where clients can and can’t go with what they’ve got over and above the simple feature list or roadmap.

The important thing here is perceived future need, and whilst there is huge overlap with a roadmap or spec sheet this isn’t the same thing.

It’s about thinking about your architecture, users and hardware and what the future holds for them.

Technical debt is where poor decisions somehow shut the new doors that you could have opened. It is where your failure to think about the future means your work needs redoing rather than tweaking, and it is where you can be disrupted.
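A toy example of the kind of door-shutting I mean, with entirely made-up names and numbers: the first version is quick to write today, the second keeps tomorrow’s cost of change low.

```python
# v1, written at pace: the destination and the transport are baked in,
# because "we only ever fax referrals to one place". Cheap now, a rewrite
# of every call site later.
def send_referral_v1(summary: str) -> str:
    return f"FAX to 0113 000 0000: {summary}"

# v2, with the future in mind: destination and transport are parameters,
# so adding email (or a messaging API) is a tweak, not a rework.
def send_referral_v2(summary: str, address: str, transport: str = "fax") -> str:
    return f"{transport.upper()} to {address}: {summary}"

print(send_referral_v1("query fracture, x-ray report attached"))
print(send_referral_v2("query fracture, x-ray report attached",
                       address="ortho@hospital.example", transport="email"))
```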

So, again, the purpose of agile IT methodology is to allow you to design late and to reduce the cost of change should it become necessary.

If we look back to the slow-paced iterative world, where developments happened over years, monolithic IT projects often existed as the only game in town: disconnected, serving to replace existing physical processes with identical digital solutions, procured by managers only able to procure a solution they could already perceive.

They designed early, created long backlogs of work, increased complexity, and made change nearly impossible.

So it’s easy to see how this world could miss the massive cultural impact of the cloud, mobile, single identity SoMe, big data, and deep learning in driving the possibilities of IT and human behaviour in creating solutions for them.

This is all mind-numbingly obvious, and to a degree passé. Most corporate customers understand and consult experts to develop and design their IT products, and have experts run the agile process, with good stories being fed into tight sprints, overseen by experts managing the portfolio (and any debt).

But where I see process debt being produced in spades is in corporate itself. We have pretty much all complained about the amoebic speed of corporate.

Part of the problem is that whilst IT has a reputation for being disruptive and fast paced, most corporates still believe that their specific monolithic processes of the past are the way of their future.

They believe that their business (and boy do they love talking about understanding their business) is today what it was yesterday. It isn’t:

The business of communication is Silicon Valley.

The business of transportation is Silicon Valley.

The business of entertainment is Silicon Valley.

The business of diagnostics is Silicon Valley.

The business of shopping is Silicon Valley.

The business of procurement is Silicon Valley.

The business of time is Silicon Valley.

The business of education is Silicon Valley.

Hell, the business of business is Silicon Valley.

Whilst not all of the companies executing in the industries above are Silicon Valley startups, to succeed they will have had to learn to compete directly with Silicon Valley: to execute as well, as fast, and with as clear an understanding of the future, just to survive, let alone differentiate or thrive.

Recent history teaches us that companies who want to execute in their traditional business have to learn to execute both in the technology of their products and in the agility of their corporate culture and processes.

This is more than merely having a mobile and pinging emails around. This is about culturally understanding, in the broadest sense, how SoMe, big data, deep learning and automation can destroy their very raison d’être, and that their main competitor is no longer the rest of their industry. It’s Silicon Valley.

Internal processes (by which I mean leadership, planning, operational management, execution, problem identification and solving) must start paying attention to the shifts in technology and embrace agile structures, creating stories, designing late, and reducing the cost of change.

It’s not about having a flat management structure with an expert somewhere in there who can manage IT products, whatever the groupthink says. It’s about getting the culture right up the chain, so the narrative of a company’s perceived future, and therefore the technical debt it accumulates, affords it a viable future.

The refusal to accept that simple, cloud-driven, mobile, connected, automated and learning processes can apply to corporate is allowing corporates first to build and rely on, and then to propagate, slow, complex, dependent and debt-ridden processes that will expose them to massive risk.

Corporate processes need to be redesigned, and in corporate structure, as in agile DevOps, the purpose of design is to allow you to plan and then execute quicker and later, with the primary goal of reducing the cost of change.

Ignore the cost of change at your peril.

I just want a diagnosis

  1. Medicine is a technology. Humans were the technology when the technology was mostly general reassurance, few investigations, few diagnoses, and few treatments.
  2. Increasing volumes of increasingly cheap investigations and diagnoses will increasingly be sold by technology companies (who employ medical staff directly) on patient demand with minimal cost / delay to diagnosis.
  3. Human medicine must reposition itself away from diagnosis to specific and specialised treatment and therapy post diagnosis.

Here is an interesting video from the TV show Maron:

We doctors used to hold all the mystique and power of medicine within us. A patient came to us with symptoms which they expected our educated and experienced brains to interpret.

We would take a history and perform an examination, giving us indirect clues as to what was physically going on inside the body (such as knowing the different sounds blood makes as it passes through normal and abnormal heart valves, or the effect of fluid in the lungs on the sound the chest makes when you tap it).

We would put all this together with our formal education and our years of clinical experience to come to one of relatively few diagnoses, and then institute therapies from a similarly limited (and dubious) number of treatments which only we would control.

Then came the quickening, the march of science and technology, in the form of ever increasing tests, investigations, diagnoses and treatments.

We have become mere processing machines: less and less interpreting histories and examinations and instituting therapies, and more and more ordering from an increasingly abundant menu of tests for an even more abundant array of diagnoses, which we refer to ever more specialists to instigate a host of personalised treatment plans.

This abundance is just too much for one clinician to know all of. It can sometimes be simple, but when a patient knows that a specialist probably exists just for their diagnosis, be it for counselling, physio, surgery or genomic medicine, then they want us less and they want them more.

We started dealing with huge demand by rationing referral, based on the scarce availability of expensive tests for diagnoses with only a few treatments. We have encouraged this in our language: GPs are talked about as “gatekeepers”; the ED has become the “front door” to healthcare. We see some symptoms and then try simple, cheap remedies over repeated consultations before eventually escalating if things deteriorate. We filter.

But what if the risk and cost of repeated consultations is actually greater than the risk and cost of the test? What if patients get wise to the old grey guy who tells them:

“Nine times out of ten this is nothing, I mean sure, one time you die, horribly, but I like those odds”

We can all see how the machines and technologies we have at our disposal are firmly implanted in our psyches and our patients’, and we will increasingly find that people don’t want opinion. They want scans or tests. They want diagnoses.

And once we test for things we only validate the initial concern.

This genie is out of the bottle, and ever cheaper, ever more available tests mean trying to put it back in will be impossible. You see, technology takes longer than we think to develop, but its effects are insanely fast, and getting faster than we might ever imagine. Computer-generated interpretation and suggestion of treatments is not far away. This may sound far-fetched, but the following is undoubtedly true in the ED:

We own nothing: a computer tells us which patient to see in which order, what tests to consider, what therapy to initiate, which number to call for which specialist and what bed to put the patient in. I just sort the bits the computer can’t do. Yet.
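For what it’s worth, the ordering part of that is conceptually very simple. The following toy priority queue, with made-up categories and presentations, captures the idea: most urgent category first, longest wait first within a category.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedPatient:
    triage_category: int          # 1 = most urgent, 5 = least urgent
    minutes_waiting_negated: int  # longer waits sort first within a category
    presentation: str = field(compare=False)

queue: list[QueuedPatient] = []
heapq.heappush(queue, QueuedPatient(3, -40, "ankle injury"))
heapq.heappush(queue, QueuedPatient(2, -5, "chest pain"))
heapq.heappush(queue, QueuedPatient(3, -90, "head laceration"))

while queue:
    print(heapq.heappop(queue).presentation)
# chest pain, head laceration, ankle injury
```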

Technology, you see, already owns the diagnostician.

Providers of these technologies should understand this and move to selling their technology as a direct consumer service as a precursor to specialist care (rather than selling hardware subsequently owned by clinicians). They can disrupt the cost in time and money to diagnosis for patients.

Medicine should move from a general community to a specialist community, to which technology will send increasing numbers of patients for treatment.

The value in general medicine should be placed in comfort and time: to explain, counsel and review in the patient’s individual best interest, not in volume throughput to coded diagnostic outcomes. It should charge for time and environment in the same way as chiropractors, hair or massage salons do, and develop environments that reflect this professionalism, as these are the industries it will be competing against most when everything else is commodified.

Thoughts on my Apple Watch

  1. Non smart watches will be viewed as pure fashion with no function.
  2. A large proportion of society will be uncomfortable with that.
  3. Those who continue to wear them will be the sort of people who make them unfashionable.

There is an argument out there that the Apple Watch is a flop. It has only sold a few million. I’m not going to argue too much about whether a new billion-dollar market created in a month or so is a flop. I’ve been asked to think more about smartwatches in the general narrative of the watch.

My initial thoughts have been that I use mine a lot, mostly for time but also for notifications, exercise and communication. Anyway, if I am not wearing it, I am not a happy bunny. The odd thing is that I am not unhappy in a way that I can easily describe. After all, we all know the anxiety created by not having our mobile: we feel disconnected from things in general. This is a far more personalised feeling of being disconnected from time.

For me the larger narrative here isn’t about the Apple Watch, it’s about peripherals as a whole: where they live and what functions they provide.

Accurate time has been sold as a luxury for years, with brands specialising in providing accuracy in various different situations through different aspects of craftsmanship. Depth certification, split second chronography, day, date, lunar calendars, altitude certification etc.
Then the quartz crisis commodified accuracy and much functionality. Overnight, Switzerland lost a vast raft of its horological workforce and a huge amount of market share. Since then CAD, 3D printing, and lean and robotic manufacturing have, to an extent, also commodified craftsmanship, but the brand value of “Swiss made” still exists.

This has narrowed many traditional time pieces to smaller market segments, catering to an extreme high end, through brand exclusivity rather than functionality.

Brands give customers a very public badge of wealth through access to the unique narrative, based on heritage and mechanical craftsmanship, that a particular brand provides.

And I am in no way arguing against this. If people differentiate watch value through brand history, it is irrelevant whether a function is commodified or not. It’s just like clothing or footwear branding. It’s fashion, pure and simple.

What is important is that when a large chunk of product exclusivity is branding, with function broadly commodified, suddenly finding your product is sub-functional is a huge risk. If your product becomes viewed as a relatively functionless piece of fashion and nothing else, that could be a nightmare.

Smartwatches will start offering increasingly useful and unique personal functions, with amazing craftsmanship, at all price ranges, for all lifestyles. What’s more, people will soon be comfortable with walking around in a ‘personal cloud’ of multiple devices, all communicating with each other as well as with the wider internet of things and an overarching conventional cloud. And to relate back to my Apple Watch experience: by ‘comfortable with’ I mean ‘uncomfortable without’.

Watches will become inextricably associated with (and so become a highly visible indicator of) a fully connected and functional life. In other words, these peripherals will become part of the process of life and the act of living in general.

The old brands may come to be viewed as the habitat of the living dead, a niche folly for the disconnected, particularly if they are only worn by very particular social groups. Having spent decades and billions creating and bludgeoning home a narrative of exclusive value built not on any new function but on heritage and the number of cogs or springs, watchmakers might want to start looking forwards.

Expect to see a repositioning.

Hello world!

So this is going to be a place where I can think out loud whilst I try to piece together some narratives from the wealth of information we are bombarded with every day.

Let’s see where it goes.