ChatGPT and Foundation Models: The Future of the AI-Assisted Workplace

By Yuri Brigance

The rise of generative models such as ChatGPT and Stable Diffusion has sparked a lot of discourse about the future of work and the AI-assisted workplace. There is tremendous excitement about the awesome new capabilities such technology promises, as well as concern over losing jobs to automation. Let’s look at where we are today, how we can leverage these new generative AI technologies to supercharge productivity, and what changes they may signal for the modern workplace.

Will ChatGPT Take Away Your Job?

That’s the question on everyone’s mind. AI can generate images, music, text, and code. Does this mean that your job as a designer, developer, or copywriter is about to be automated? Well, yes. Your job will be automated in the sense that it is about to become a lot more efficient, but you’ll still be in the driver’s seat.

First, not all automation is bad. Before personal computers became mainstream, taxes were completed with pen and paper. Did modern tax software put accountants out of business? Not at all. It made their job easier by automating repetitive, boring, and boilerplate tasks. Tax accountants are now more efficient than ever and can focus on mastering tax law rather than wasting hours pushing paper. They handle more complicated tax cases, those personalized and tailored to you or your business. Similarly, it’s fair to assume that these new generative AI tools will augment creative jobs and make them more efficient and enjoyable, not supplant them altogether.

Second, generative models are trained on human-created content. This ruffles many feathers, especially in the creative industry, where artists’ work is being used as training data without their explicit permission, allowing a model to replicate their unique style. Stability.ai plans to address this problem by enabling artists to opt out of having their work included in the dataset, but realistically there is no way to guarantee compliance and no definitive way to prove whether your art is still being used to train models. This does, however, open interesting opportunities. What if you licensed your style to an AI company? If you are a successful artist and your work is in demand, there could be a future where you license your work as training data and get paid any time a new image is generated based on your past creations. It is possible that responsible AI creators could measure the gradient updates during training, and the percentage of neuron activations associated with specific samples of data, to estimate how much of your licensed art the model used to generate an output. This would work much like Spotify paying a small fee to a musician every time someone plays one of their songs, or websites like Flaticon.com paying a fee to a designer every time one of their icons is downloaded. Long story short, it is likely that we will soon see stricter controls over how training datasets are constructed with respect to licensed work versus the public domain.

Let’s look at some positive implications of the AI-assisted workplace for a few creative roles, and at how this technology can streamline certain tasks.

As a UI designer, when designing web and mobile interfaces you likely spend significant time searching for stock imagery. The images must be relevant to the business, have the right colors, allow for some space for text to be overlaid, etc. Some images may be obscure and difficult to find. Hours could be spent finding the perfect stock image. With AI, you can simply generate an image based on text prompts. You can ask the model to change the lighting and colors. Need to make room for a title? Use inpainting to clear an area of the image. Need to add a specific item to the image, like an ice cream cone? Show AI where you want it, and it’ll seamlessly blend it in. Need to look up complementary RGB/HEX color codes? Ask ChatGPT to generate some combinations for you.
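Inpainting of this kind is already exposed through public APIs. As a hedged illustration (assuming the OpenAI Python SDK and its image-edit endpoint; the file names, prompt, and size are placeholders), clearing space in a stock image for a title might look roughly like this:

```python
# A minimal sketch using the OpenAI image-edit (inpainting) endpoint.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# file names and the prompt are placeholders.
from openai import OpenAI

client = OpenAI()

# `mask.png` matches the source image, with the region to replace
# (e.g., the space reserved for a title) made fully transparent.
result = client.images.edit(
    image=open("hero_photo.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="a clean pastel background with open space at the top",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # URL of the inpainted image
```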

Will this put photographers out of business? Most likely not. New products and devices continue to come out, and they need to be incorporated into the training data periodically. If we are clever about licensing such assets for training purposes, you might end up making more revenue than before, since AI can use a part of your image and pay you a partial fee for each request many times a day, rather than one user buying one license at a time. Yes, work needs to be done to enable this functionality, which is why it is important to raise the issue now and work toward a solution that benefits everyone. Generative models trained today will be woefully outdated in ten years, so they will continue to require fresh, human-generated, real-world data to stay relevant. AI companies will have a competitive edge if they can license high-quality datasets, and you never know which of your images the AI will use – you might even figure out which photos to take more of to maximize that revenue stream.

Software engineers, especially those in professional services, frequently need to switch between multiple programming languages. Even on the same project, they might use Python, JavaScript / TypeScript, and Bash at the same time. It is difficult to context-switch and remember all the peculiarities of a particular language’s syntax. How do you efficiently write a for-loop in Python vs Bash? How do you deploy a Cognito User Pool with a Lambda authorizer using AWS CDK? We end up Googling these snippets because working with this many languages forces us to remember high-level concepts rather than specific syntactic sugar. GitHub Gist exists for the sole purpose of offloading snippets of useful code from local memory (your brain) to external storage. With so much to learn, and things constantly evolving, it’s easier to be aware that a particular technique or algorithm exists (and where to look it up) than to remember it in excruciating detail, as if reciting a poem. Tools like ChatGPT integrated directly into the IDE would reduce the time developers spend remembering how to create a new class in a language they haven’t used in a while, how to set up branching logic, or how to build a script that moves a bunch of files to AWS S3. They could simply ask the IDE to fill in this boilerplate and move on to solving the more interesting algorithmic challenges.
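To make that concrete, here is the kind of boilerplate a developer might otherwise hunt down: a minimal sketch (assuming boto3 is installed and AWS credentials are configured; the bucket name and folder are placeholders) that uploads a folder of files to S3:

```python
# Minimal sketch: upload every file under ./reports to S3.
# Assumes `pip install boto3` and AWS credentials configured via the
# environment or ~/.aws/credentials; the bucket name is a placeholder.
from pathlib import Path

import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder

for path in Path("reports").rglob("*"):
    if path.is_file():
        # Use the file's relative path as the S3 object key.
        s3.upload_file(str(path), bucket, path.as_posix())
        print(f"uploaded {path} -> s3://{bucket}/{path.as_posix()}")
```

This is exactly the sort of snippet an IDE-integrated assistant could produce on request, freeing the developer from re-deriving it.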

An example of asking ChatGPT how to use Python decorators. The accompanying explanation and example code snippet are very informative.
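The original post showed a screenshot of that exchange. As a stand-in, here is a representative sketch of the kind of decorator example ChatGPT typically produces (the function names are illustrative):

```python
import functools
import time


def timed(func):
    """Decorator that reports how long the wrapped function took."""
    @functools.wraps(func)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.4f}s")
        return result
    return wrapper


@timed
def slow_sum(n):
    return sum(range(n))


slow_sum(1_000_000)  # prints something like: slow_sum took 0.0312s
```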

For copywriters, it can be difficult to overcome the writer’s block of not knowing where to start or how to conclude an article. Sometimes it’s challenging to concisely describe a complicated concept. ChatGPT can be helpful in this regard, especially as a tool to quickly look up clarifying information about a topic. Caution is justified, though: Stephen Wolfram, CEO of Wolfram Alpha, recently made a compelling argument that ChatGPT’s answers should not always be taken at face value, so doing your own research is key. That said, OpenAI’s model usually provides a good starting point for explaining a concept, and at the very least it can provide pointers for further research. For now, writers should always verify its answers. Let’s also remember that ChatGPT has not been trained on any information created after 2021, so it is not aware of new developments in the war in Ukraine, current inflation figures, or the recent fluctuations of the stock market, for example.

In Conclusion

Foundation models like ChatGPT and Stable Diffusion can augment and streamline workflows, and they are still far from being able to directly threaten a job. They are useful tools that are far more capable than narrowly focused deep learning models, but they require a degree of supervision and caution. Will these models become even better 5-10 years from now? Undoubtedly so. And by that time, we might just be used to them and have several years of experience working with these AI agents, including their quirks and bugs.

There is one important thing to take away about foundation models and the future of the AI-assisted workplace: today they are still very expensive to train. They are not connected to the internet and can’t consume information in real time in an online, incremental training mode. There is no database to load new data into, which means that to incorporate new knowledge, the dataset must grow to encapsulate recent information, and the model must be fine-tuned or re-trained from scratch on this larger dataset. It’s difficult to verify that the model outputs factually correct information, since the training dataset is unlabeled and the training procedure is not fully supervised. There are interesting open-source alternatives on the horizon (such as the U-Net-based Stable Diffusion), and techniques to fine-tune portions of a larger model to a specific task at hand, but those are more narrowly focused, require a lot of tinkering with hyperparameters, and are generally beyond the scope of this article.

It is difficult to predict exactly where foundation models will be in five years and how they will impact the AI-assisted workplace since the field of machine learning is rapidly evolving. However, it is likely that foundation models will continue to improve in terms of their accuracy and ability to handle more complex tasks. For now, though, it feels like we still have a bit of time before seriously worrying about losing our jobs to AI. We should take advantage of this opportunity to hold important conversations now to ensure that the future development of such systems maintains an ethical trajectory.


What Separates ChatGPT and Foundation Models from Regular AI Models?

By Yuri Brigance

This article introduces what separates foundation models from regular AI models. We explore why these models are difficult to train and how to understand them in the context of more traditional deep learning models.


What are foundation models, and how are they different from traditional deep learning AI models? The Stanford Institute for Human-Centered AI defines a foundation model as “any model that is trained on broad data (generally using self-supervision at scale) that can be adapted to a wide range of downstream tasks”. This description fits a lot of narrow AI models as well, such as MobileNets and ResNets – they too can be fine-tuned and adapted to different tasks.

The key distinctions here are “self-supervision at scale” and “wide range of tasks”.

Foundation models are trained on massive amounts of unlabeled or semi-labeled data, and they contain orders of magnitude more trainable parameters than a typical deep learning model meant to run on a smartphone. This makes foundation models capable of generalizing to a much wider range of tasks than smaller models trained on domain-specific datasets. It is a common misconception that throwing lots of data at a model will suddenly make it do anything useful without further effort. In reality, such large models are very good at finding and encoding intricate patterns in the data with little to no supervision – patterns which can be exploited in a variety of interesting ways – but a good amount of work is needed to put this learned hidden knowledge to practical use.

Unsupervised, semi-supervised, and transfer learning are not new concepts, and to a degree, foundation models fall into this category as well. These learning techniques trace their roots back to the early days of generative modeling, such as Restricted Boltzmann Machines and Autoencoders. These simpler models consist of two parts: an encoder and a decoder. The goal of an autoencoder is to learn a compact representation (known as an encoding, or latent space) of the input data that captures its important features or characteristics – a kind of progressive linear separation of the features that define the data. This encoding can then be used to reconstruct the original input, or to generate entirely new synthetic data by feeding cleverly modified latent variables into the decoder.

An example of a convolutional image autoencoder architecture, trained to reconstruct its own input (e.g., images). Intelligently modifying the latent space allows us to generate entirely new images. One can extend this by adding an extra model that encodes text prompts into latent representations understood by the decoder, enabling text-to-image functionality.
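To ground the idea, here is a minimal sketch of such a convolutional autoencoder (PyTorch is an assumption on our part; the layer sizes are illustrative and tuned for 28×28 grayscale images):

```python
# A minimal convolutional autoencoder sketch (assumes `pip install torch`).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the image into a small latent representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),  # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 4, 3, stride=2, padding=1),  # 14x14 -> 7x7
        )
        # Decoder: reconstruct the image from the latent representation.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        latent = self.encoder(x)       # the compact "encoding"
        return self.decoder(latent)    # the reconstruction

model = AutoEncoder()
x = torch.rand(8, 1, 28, 28)                 # a fake batch of images
loss = nn.functional.mse_loss(model(x), x)   # reconstruction objective
```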

Many modern ML models use this architecture, and the encoder portion is sometimes referred to as the backbone with the decoder being referred to as the head. Sometimes the models are symmetrical, but frequently they are not. Many model architectures can serve as the encoder or backbone, and the model’s output can be tailored to a specific problem by modifying the decoder or head. There is no limit to how many heads a model can have, or how many encoders. Backbones, heads, encoders, decoders, and other such higher-level abstractions are modules or blocks built using multiple lower-level linear, convolutional, and other types of basic neural network layers. We can swap and combine them to produce different tailor-fit model architectures, just like we use different third-party frameworks and libraries in traditional software development. This, for example, allows us to encode a phrase into a latent vector which can then be decoded into an image.
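As a hedged sketch of that modular composition (assuming PyTorch and torchvision, which the article does not specify), swapping a head on a shared backbone is a one-liner:

```python
# Sketch: reuse a pretrained backbone, attach a new task-specific head.
# Assumes `pip install torch torchvision`; sizes are illustrative.
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet18(weights="IMAGENET1K_V1")  # pretrained encoder

# Replace the classification head: resnet18's final layer takes 512 features.
backbone.fc = nn.Linear(512, 10)   # e.g., a new 10-class problem

# The same backbone could instead feed a single-output regression head:
# backbone.fc = nn.Linear(512, 1)
```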

Modern Natural Language Processing (NLP) models like ChatGPT fall into the category of Transformers. The transformer concept was introduced in the 2017 paper “Attention Is All You Need” by Vaswani et al. and has since become the basis for many state-of-the-art models in NLP. The key innovation of the transformer model is the use of self-attention mechanisms, which allow the model to weigh the importance of different parts of the input when making predictions. These models make use of something called an “embedding”, which is a mathematical representation of a discrete input, such as a word, a character, or an image patch, in a continuous, high-dimensional space. Embeddings are used as input to the self-attention mechanisms and other layers in the transformer model to perform the specific task at hand, such as language translation or text summarization. ChatGPT isn’t the first, nor the only transformer model around. In fact, transformers have been successfully applied in many other domains such as computer vision and sound processing.
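The core of that self-attention mechanism is compact enough to sketch directly. This is a simplified, single-head version of scaled dot-product attention; real transformers add multiple heads, masking, and learned projections at every layer:

```python
# Scaled dot-product self-attention, simplified to a single head.
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) embeddings; w_*: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Each token scores every other token; scaling stabilizes gradients.
    scores = (q @ k.T) / math.sqrt(k.shape[-1])
    weights = torch.softmax(scores, dim=-1)  # attention weights per token
    return weights @ v                       # weighted mix of the values

d_model, d_k, seq_len = 16, 8, 5
x = torch.randn(seq_len, d_model)            # five token embeddings
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)       # shape (5, 8)
```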

So if ChatGPT is built on top of existing concepts, what makes it so different from all the other state-of-the-art model architectures already in use today? A simplified explanation of what distinguishes a foundation model from a “regular” deep learning model is the immense scale of the training dataset as well as the number of trainable parameters that a foundation model has over a traditional generative model. An exceptionally large neural network trained on a truly massive dataset gives the resulting model the ability to generalize to a wider range of use cases than its more narrowly focused brethren, hence serving as a foundation for an untold number of new tasks and applications. Such a large model encodes many useful patterns, features, and relationships in its training data. We can mine this body of knowledge without necessarily re-training the entire encoder portion of the model. We can attach different new heads and use transfer learning and fine-tuning techniques to adapt the same model to different tasks. This is how just one model (like Stable Diffusion) can perform text-to-image, image-to-image, inpainting, super-resolution, and even music generation tasks all at once.
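In practice, adapting such a model often means freezing the expensive pretrained encoder and training only a small new head. A minimal sketch of that pattern, building on the torchvision example above (names and sizes are illustrative):

```python
# Fine-tuning sketch: freeze the pretrained backbone, train only a new head.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False        # keep the pretrained knowledge intact

model.fc = nn.Linear(512, 3)           # new task-specific head (trainable)

# Only the head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```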

The GPU compute and human resources required to train a foundation model like GPT from scratch dwarf those available to individual developers and small teams. The models are simply too large, and the dataset is too unwieldy. Such models cannot (as of now) be cost-effectively trained end-to-end and iterated using commodity hardware.

Although the concepts may be well explained by published research and understood by many data scientists, the engineering skills and eye-watering costs required to wire up hundreds of GPU nodes for months at a time would stretch the budgets of most organizations. And that’s ignoring the costs of dataset access, storage, and data transfer associated with feeding the model massive quantities of training samples.

There are several reasons why models like ChatGPT are currently out of reach for individuals to train:

  1. Data requirements: Training a large language model like ChatGPT requires a massive amount of text data. This data must be high-quality and diverse and is typically obtained from a variety of sources such as books, articles, and websites. This data is also preprocessed to get the best performance, which is an additional task that requires knowledge and expertise. Storage, data transfer, and data loading costs are substantially higher than what is used for more narrowly focused models.
  2. Computational resources: ChatGPT requires significant computational resources to train. This includes networked clusters of powerful GPUs and a large amount of memory, both volatile and non-volatile. Running such a compute cluster can easily cost hundreds of thousands of dollars per experiment.
  3. Training time: Training a foundation model can take several weeks or even months, depending on the computational resources available. Wiring up and renting this many resources requires a lot of skill and a generous time commitment, not to mention associated cloud compute costs.
  4. Expertise: Getting a training run to complete successfully requires knowledge of machine learning, natural language processing, data engineering, cloud infrastructure, networking, and more. Such a large cross-disciplinary set of skills is not something that can be easily picked up by most individuals.

That said, there are pre-trained models available, and some can be fine-tuned with a smaller amount of data and resources for a more specific and narrower set of tasks, which is a more accessible option for individuals and smaller organizations.

Stable Diffusion took $600k to train – the equivalent of 150K GPU hours, or a cluster of 256 GPUs running 24/7 for nearly a month – and it is considered a cost reduction compared to GPT. So, while it is indeed possible to train your own foundation model using commercial cloud providers like AWS, GCP, or Azure, the time, effort, required expertise, and overall cost of each iteration impose limitations on their use. There are many workarounds and techniques to re-purpose and partially re-train these models, but for now, if you want to train your own foundation model from scratch, your best bet is to join one of the few companies with access to the resources necessary to support such an endeavor.


Retail Technology and Innovation – A Conversation with Michael Guzzetta

We recently spent some time with Michael Guzzetta, a seasoned retail technology and innovation executive and consultant who has worked with brands such as The Walt Disney Company, Microsoft, See’s Candies, and H-E-B.

Tell me about your background. What brought you to retail?

Like many people, I launched my retail career in high school when I worked in the men’s department at Robinson’s May. I also worked for The Warehouse (a music retailer) and was a CSR at Blockbuster Video – strangely, I still miss the satisfaction of organizing tapes on shelves.

I ignited my tech career in 2001 when I started working in payment processing and cloud-based tech, and then I returned to retail in 2009 when I joined Disney Store North America, one of the world’s strongest retail brands.

During my tenure at Disney, I had the privilege of working at the intersection of creative, marketing, and mobile/digital innovation. And this is where the innovation bug bit me and kicked off my decades-long work on omnichannel innovation projects. I seek opportunities to test and deploy in-store technology to simplify experiences for customers and employees, increase sales, and drive demand. Since jump-starting this journey at Disney Store, I’ve also helped See’s Candies, Microsoft, and H-E-B to advance their digital transformation through retail innovation.

What are some of the retail technologies that got you started?

I’ve seen it all! I’ve re-platformed eCommerce sites, deployed beacons and push notifications, rolled out in-store traffic counting, worked on warehouse efficiency, and automated and integrated buyer journeys and omnichannel programs, among other things. I recently built a 20,000-square-foot innovation lab to run proofs of concept that validate technology and support testing and deployment in live environments. Smart checkout, supply chain, inventory management, eCommerce… you name it.

What are the biggest innovation challenges in retail today?

Some questions that keep certain retailers up at night are, “How can we simplify the shopping experience for customers and make it easier for them to check out?”, “How can we optimize our supply chain and inventory operations?”, “How can we improve accuracy for customers shopping online and reduce substitutions and shorts in fulfillment?” and “How can we make it easier and more efficient for personal shoppers to shop curbside and home delivery orders?” Not to mention, “What is the future of retail, and which technologies can help us stay competitive?”

I see potential in several trends to address those challenges, but my top three are:

Artificial Intelligence/Machine Learning – AI will continue to revolutionize retail. It has permeated most of the technology we use today, whether SAAS or hardware, like smart self-checkout: you can use AI, computer vision, and machine learning to identify products and immediately put them in your basket. AI is embedded in our everyday lives – it powers the smart assistants we use daily, monitors our social media activity, helps us book our travel, and runs self-driving cars, among dozens of other applications. And as a subset of AI, machine learning allows models to keep learning and improving, further advancing AI capabilities. I could go on, but suffice it to say that the retailer that nails AI first wins.

Computer vision. Computer vision has a sizable opportunity to solve inventory issues, especially for grocery brands. Today, there’s a gap between online inventory and what’s on the shelf, since the inventory system can’t keep pace with what’s actually stocked and available for personal shoppers, which is frustrating for customers who don’t expect substitutions or out-of-stock deliveries. With the advent of computer vision cameras, you can close that gap and see what is on the shelf in real time, accurately informing what is shown as available online. Computer vision-supported inventory management will be vital to creating a truly omnichannel experience. Computer vision also enables smart shopping carts, self-checkout kiosks, and loss and theft prevention. Not to mention Amazon’s use of CV cameras with their Just Walk Out tech in Amazon Go, Amazon Fresh, and specific Whole Foods locations. It has endless applications for retail and gives you the eyes online that you can’t get in stores today.

Robotics. In the last five years, robotics has taken a seismic leap, and a shift has happened, which you can see in massive, automated fulfillment centers like those operated by Amazon, Kroger, and Walmart. A brand can deliver groceries in a region without having a physical store, thanks to robotic fulfillment centers and distribution centers. It’s a game-changer. Robotics has many functions beyond fulfillment in retail, but this application truly stands out.

What is a missed opportunity that more retail brands should take advantage of?

Data. Data is huge, and its importance can’t be overstated. It’s a big missed opportunity for retailers today. Improving data management, governance, and sanitation is a massive opportunity for retailers that want to innovate.

Key opportunity areas around data in retail include customer experience (know your customer), understanding trends related to customer buying habits, and innovation. You can’t innovate at any speed with dirty data.

There’s a massive digital transformation revolution underway among retailers, and they are trying to innovate with data, but they have so much of it that it can be overwhelming. They are trying to create data lakes and a single source of truth, and sometimes those efforts stall because of disparate data networks. I believe some of the more prominent retailers will have their data act together in a few years.

“Dirty data” results from companies being around for a long time: they’ve accrued multiple data sets and cloud providers, and their data hasn’t been merged and cleaned. If you don’t have the right data, you are making decisions based on bad or stale data, which can genuinely hurt you strategically.

What do you wish more people understood about retail technology and innovation?

Technology will not replace people. In my experience, technology is meant to enhance the human experience, which includes employees. If technology simplifies the process so much that the employees become idle, they are typically trained to manage the technology or cross-trained to grow their careers. Technology isn’t replacing the human experience any time soon, although it is undoubtedly changing the existing work experience – ideally for the better, both for the employees and the bottom line.

Technology doesn’t always lower costs for retailers. Hardware innovation requires significant capital expenses when it’s deployed chain-wide. Amazon’s “Just Walk Out” is impressive technology, but the infrastructure, cloud computing costs, and computer vision cameras are insanely expensive. In 5 years, that may be different, but today it is a loss leader. It’s worth it for Amazon because they can get positive press, demonstrate innovation, and show industry leadership. But Amazon has not lowered its operating costs with “Just Walk Out.” This is just one example, but there are many out there.

Online shopping will not eliminate brick-and-mortar shopping. If the pandemic has taught us anything, it’s that online shopping is here to stay – and convenience is extremely attractive to consumers. But I think people will never stop going to stores, because people love shopping. The experience of tangibly picking something up and engaging with employees in a store will always be around, even with the advent of the Metaverse.


What are some brands that excite you right now because of how they use technology?

Amazon. What they have been doing with Just Walk Out technology, dash carts, smart shelves, and other IoT technology puts Amazon at the front of the innovation pack. Let’s not forget that they’ve led the way in same or next-day delivery by innovating with their automated fulfillment centers! They have the desire, the resources, and the talent to be the frontrunner for years to come.

Alibaba. This Chinese company is another retailer that uses technology in incredible ways. Their HEMA retail grocery stores are packed with innovation and technology. They have IoT sensors across the stores, electronic shelf labels, facial recognition cameras so you can check out with your face, and robotic kitchens where your order is made and delivered on conveyor belts. They also have conveyors throughout the store, so a personal shopper can shop by zone, then hook bags to be carried to the wareroom for sortation and delivery prep – it’s impressive.

Walmart and Kroger. Both brands’ use of automated fulfillment centers (AFCs) and drone technology (among many others) are pushing the boundaries of grocery retail today. Their AFCs cast a much wider net and have expanded their existing markets, so, for example, we may see Kroger trucks in neighborhoods that don’t have a store in sight.

Home Depot. They have a smart app with 3D augmented reality and robust in-store mapping/wayfinding. Their use of machine learning is also impressive. For example, it helps them better understand what type of projects a customer might be working on based on their browsing and shopping habits.

Sephora. They use beacon technology to bring people with the Sephora app into the store and engage them. They have smart mirrors that help customers pick the right makeup for their skin tone and provide tutorials. Customers can shop directly through smart mirrors or work with an in-store makeup artist.

What advice do you have for retailers that want to invest in technology innovation?

My first piece of advice is to include change management in the project planning from the start.

There are inherent challenges in retail innovation, often due to change management issues. When a company has been around for decades or even more than a century, they operate with well-known, trusted, and often outdated infrastructure. While that infrastructure can’t uphold the company for the next several decades or centuries, there can be a fear of significant change and a deeply rooted preference for existing systems. There can be a fear of job loss because of the misconception that technology will replace people in retail.

Bring those change-resistant people into the innovation process early and often and invite them to be part of the idea generation. Any technology solution needs to be designed with the user’s needs in mind, and this audience is a core user group. Think “lean startup” approach.

My second piece of advice is to devote enough resources to innovation and give the innovation team the power to make decisions. The innovation team should still operate with lean resources, focusing on minimum viable products and proofs of concept, so failures aren’t cost-prohibitive. The innovation team performs best when it has the autonomy to test, learn, and fail as they explore innovative solutions. Then, it reports its findings and recommendations to higher-ups to calibrate and pivot where needed.

In closing, I’d say the key to innovation success is embracing the notion of failure. Failure has value! Put another way: failure is the fast track to learning. Learning what not to do and what to try next can help a retail company accelerate past the competition. Think MVP, stay lean, get validated feedback quickly, and iterate until you have a breakthrough. And always maintain a growth mindset – never stop learning and growing.

Digital Twins, Machine Learning, and IoT

Digital twins are part of the interconnected system known as the Internet of Things (IoT). In 2021, Accenture positioned them as one of the top five strategic technology trends to watch.

Image credit: Noria Corporation

As the name suggests, a digital twin is a virtual model designed to reflect a physical object. Companies like Chevron are using digital twins to predict maintenance issues faster, and Unilever used one on the Azure IoT platform to analyze and fine-tune factory operations such as temperatures and production cycle times.

With a digital twin, the object being studied is outfitted with sensors related to key areas of functionality to produce data about aspects of the physical object’s performance, such as energy output, temperature, and weather conditions. The data is relayed to a processing system and applied to the twin. 

Once informed with this data, the digital twin can run simulations, study performance issues, and generate possible improvements, all while generating insights that can be applied to the physical object.
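As a toy illustration of that loop (names and thresholds are purely hypothetical; a real deployment would use a managed platform such as Azure Digital Twins or AWS IoT TwinMaker), a twin ingests sensor readings, mirrors them as state, and flags conditions worth acting on:

```python
# A toy digital-twin loop: ingest sensor data, mirror state, flag issues.
# Names and thresholds are illustrative, not from any real platform.
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """Virtual mirror of a physical pump."""
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def apply_reading(self, reading: dict) -> None:
        """Relay a sensor reading onto the twin's state."""
        self.state.update(reading)
        self.history.append(reading)

    def maintenance_alert(self) -> bool:
        """A stand-in 'simulation': flag overheating under load."""
        return self.state.get("temp_c", 0) > 80 and self.state.get("rpm", 0) > 1500

twin = PumpTwin()
twin.apply_reading({"temp_c": 85, "rpm": 1700, "vibration_mm_s": 4.2})
if twin.maintenance_alert():
    print("Schedule inspection before failure occurs.")
```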

Sometimes digital twins include a rich immersive visual experience, but that’s not always the case. Sometimes they have a simple interface or no interface at all.

Digital twins are part of the evolution of IoT within the digital transformation. They are often used today in commercial real estate and facilities planning, and as we think about the metaverse, digital twins take on increasing importance for virtual spaces. When you consider the implications of machine learning for digital twins and the IoT, the possibilities for real-time smart monitoring get very interesting.

Imagine a large corporate campus that has been turned into an enormous digital twin that expands to other campuses and physical locations. What if that digital twin uses machine learning to optimize things like traffic, utilities, and weather? How could a global company use digital twins to have a complete model of the physical world?

Here is our biggest tip for anyone considering digital twins as part of a project strategy:

We like to start by considering the existing tools. A robust set of tools already exists from companies like Microsoft (Azure Digital Twins) and Amazon Web Services (AWS IoT TwinMaker), both of which are Valence partners.

Leverage existing industry ontologies (data dictionaries) – the shared schemas, naming systems, and data formats used for interchange within a community. You’ll benefit from established best practices and from broader interoperability with third-party vendors.

Microsoft, for example, contributed the Digital Twins Definition Language (DTDL), an industry standard that makes it simpler to build, use, and maintain digital twins.

The underlying services are provisioned automatically, so developers can build upon a platform of services and extend the existing Microsoft or Amazon product. The process isn’t turnkey, and you won’t be able to create a digital twin using completely out-of-the-box tools, but the platform is managed for you, which lowers operating costs. The platforms are also more secure and designed with operational best practices in mind, such as automatic backups and built-in deployment automation.

Building upon industry standards will also save you time. For example, if you want to create a smart building solution and need to describe a building’s physical space, industry standards will help since software developers don’t usually have a facilities or building management background. An industry-standard model gives developers an advantage when creating a digital twin that their clients can understand and use.  

Data-driven solution

Digital twins create a platform to measure and store data. With that data available, you can test and answer both operational and business questions. For example, you can investigate fragile, risky components in your supply and production systems and explore opportunities to improve and expand into new services. The key is that measuring and storing the data are essential steps before any analytical tool can be used.

Digital Twins are Evolving

While building a digital twin is beyond what a typical business user can do alone, we can develop these complex systems with a modest team of developers and designers. We typically only need to bring in highly specialized engineers when there are heavy integration and interoperability challenges across several vendors.

The technology is evolving, and early-stage challenges with vendor integration will improve over time, making it easier to transition a digital twin solution from one cloud provider to another.

One of the keys to digital transformation is challenging how we do things today to explore how to get more computerization and automation involved. Can digital twins improve your organization’s warehousing and distribution? Can digital twins improve the challenges faced in the supply chain? Can your sustainability goals be tested with a digital twin? There are many possibilities to consider!

Announcing the Blockchain Innovation Accelerator

Yesterday we were excited to announce the release of two new innovation programs related to blockchain technologies: the Blockchain Innovation Accelerator, and an internal, employee-focused crypto marketplace built on blockchain technology.

The Blockchain Innovation Accelerator is focused on accelerating customer projects related to understanding and applying blockchain technologies in the market today. It centers on a loyalty points scenario in which a sample airline uses blockchain to let customers and vendors redeem and exchange frequent flyer miles (or “tokens”) using smart contract technology, and it offers a reference architecture and sample code to make the scenario more approachable and to accelerate development efforts. The Blockchain Innovation Accelerator is built on Microsoft Azure using the Ethereum blockchain and implements the ERC20 Token Interface with custom smart contract modifications that restrict the flow of tokens to whitelisted participants. This is the second Innovation Accelerator released by Valence, following the release of the HoloLens Innovation Accelerator this past May. Built by the Valence Innovation Team, the Innovation Accelerator program strives to provide specific – and often vertical, industry-oriented – frameworks to help “jumpstart” real-life solutions.
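The actual accelerator implements this rule in Solidity on Ethereum; as a language-agnostic sketch of the whitelist restriction itself (all names are illustrative), the modification amounts to checking both parties before moving a balance:

```python
# A language-agnostic sketch of an ERC20-style transfer restricted to
# whitelisted participants. The real accelerator expresses this rule in a
# Solidity smart contract; the names here are illustrative.
class LoyaltyToken:
    def __init__(self, whitelist):
        self.whitelist = set(whitelist)  # approved airline/vendors/customers
        self.balances = {}

    def mint(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender, recipient, amount):
        # The custom modification: both parties must be whitelisted.
        if sender not in self.whitelist or recipient not in self.whitelist:
            raise PermissionError("transfer restricted to whitelisted participants")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

token = LoyaltyToken(whitelist={"airline", "vendor_a", "customer_1"})
token.mint("airline", 1_000)
token.transfer("airline", "customer_1", 250)    # allowed
# token.transfer("customer_1", "outsider", 10)  # raises PermissionError
```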

“At Valence we focus exclusively on digital transformation technologies and how they work together to deliver real business results for customers,” said Jim Darrin, Managing Director of Valence. “We believe blockchain will change the nature of distributed systems, and so we have assembled a set of software components and reference architectures to accelerate the ability to both build integrated loyalty points solutions as well as make blockchain more approachable generally for enterprise customers everywhere.”

Additionally, the company today announced the release of its internal, employee-focused crypto marketplace. Built on Hyperledger Sawtooth blockchain technology, this internal marketplace enables the issuing and redemption of a Valence cryptocurrency called Valence Level Electrons (symbol: VLE). When joining Valence, new employees get a fixed number of VLE, which they can spend on the internal Valence marketplace to purchase unique goods and services. Each step of the transaction is recorded on the blockchain, and over time VLE will gain value correlated with the growth of the company. Additionally, the company expects to enable employees to post their own offerings and create a barter environment to increase the number of unique goods and services available.

“I am incredibly excited to release this internal marketplace based on the core concepts of blockchain,” said Matthew Carlisle, Technology Director at Valence. “We strive to make all our digital transformation technologies approachable to employees, and we believe this idea of creating our own internal token system for redemption of goods and services will be a great way for employees to experience blockchain and token technologies first hand. And we’re thrilled to be able to share our experience with the rest of the world as well.”

Cryptocurrency Marketplace, Blockchain, Smart Contracts, Oh My!

In this post, we talk about how to use blockchain technology to improve the employee experience with a cryptocurrency marketplace.

At several previous companies I’ve been given jackets, t-shirts, mugs, and other nice employee perks – the simple stuff. While these have always been well-meaning gestures by company leadership, it’s usually been the case that these perks were not entirely to my personal taste. After all, it is pretty hard to imagine a scenario where you can accommodate the tastes and interests of every employee. But what if there was a way to customize employee merchandise to what each employee really wants?

At Valence we like to use the digital transformation technologies we evangelize to our customers to advance our own business. You have certainly heard this before – the old “eating your own dog food” program. But in doing so we not only learn how these technologies work; we also (1) create a learning environment for all our employees and (2) gain real-life experience we can share with our customers. To that end – and to tackle the issue of employee preference – we developed our own merchandise cryptocurrency marketplace using Hyperledger Sawtooth, one of the most popular blockchain technologies available today.

The user experience is simple: when joining Valence, our employees get a fixed number of Valence Level Electrons (symbol: VLE), and we have an internal cryptocurrency marketplace website where employees can spend their VLE on various Valence-branded items you might not normally find: wireless headphones, beer growlers, and more. More VLE is granted to employees over time, as performance and circumstances allow.

If you’re saying to yourself right now, “um, folks… in 2017 we just called this a ‘website’”, then I’d say you are both right and wrong. Right in that – yes, it’s a website! But wrong in the sense that the underpinnings of our marketplace use blockchain technologies. And this underpinning lets us think about a few new things that would not be easily possible with just a “website”: (1) a level of integrity and transaction auditing, and, more importantly, (2) the ability to easily open this up more broadly to other vendors. After all, why couldn’t other companies in the local area start accepting VLE as currency for the exchange of goods and services? It is both possible and totally reasonable.

For larger companies this starts to make a lot of sense. It allows them to give employees more choice and, in theory, spend less on perks, since there would be less waste. It also opens the door to outside vendors, because blockchain technologies allow for very simple integration, so vendors can accept VLE instead of the good old U.S. dollar (USD). To enable this, we would need to create a reimbursement flow as well as assign a starting VLE-to-USD exchange rate. But note that this can all be done in a trustless environment. In other words, to add a vendor that accepts VLE, we don’t need to trust that company and they don’t need to trust us. Smart contracts running on the blockchain enforce the rules that govern our business relationship. Each party can do what they want as long as they follow the rules.
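To make the exchange-rate idea concrete, here is a hedged sketch of what such a redemption rule might look like (the names and rate are illustrative; an actual implementation would live in an on-chain smart contract):

```python
# Illustrative sketch of a VLE redemption rule with a USD exchange rate.
# A real implementation would run as an on-chain smart contract.
VLE_TO_USD = 0.25  # hypothetical starting exchange rate

def redeem(vendor_whitelisted: bool, balance_vle: int, price_usd: float) -> int:
    """Return the remaining VLE balance after a purchase, enforcing the rules."""
    if not vendor_whitelisted:
        raise PermissionError("vendor is not an approved VLE participant")
    cost_vle = round(price_usd / VLE_TO_USD)
    if balance_vle < cost_vle:
        raise ValueError("insufficient VLE balance")
    return balance_vle - cost_vle

remaining = redeem(vendor_whitelisted=True, balance_vle=500, price_usd=25.0)
print(remaining)  # 400 VLE left after a $25 purchase at 0.25 USD/VLE
```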

There are many companies out there that issue loyalty points, with airlines certainly among the largest. As users accumulate large amounts of points, this creates several problems: first, customers accumulate too many points and may switch airlines, and second – in many cases the bigger issue – it creates an outstanding financial liability for the company. In reaction, airlines try to find innovative ways to solve these problems. In fact, recently I saw a glass of fancy champagne available to buy with my Delta Air Lines SkyMiles. And yes, I can already exchange my SkyMiles for a few other things, but honestly the selection is really limited – and there is a reason for that: you can only imagine the IT integration headaches of onboarding a vendor into Delta’s IT systems so that it could access a user’s SkyMiles balance to redeem points for goods or services.

Realizing this is a scenario many customers might struggle with, today we built and released our Blockchain Innovation Accelerator focused on loyalty points systems. Built on Microsoft Azure and Ethereum, this Innovation Accelerator provides a reference architecture and sample code to make these technologies more approachable. In general, moving loyalty points – or similar scenarios – to blockchain technologies offers the potential to create a much larger market for those points by solving the issues associated with the current system.

Transformational Technology: Don’t Blink or You’ll Miss It

Every generation experiences transformational technology. I used to imagine what life was like for my great-grandmother, who was born in 1900. Can you fathom witnessing the rise of the airplane, the Model-T, nuclear power, space flight, personal computing, and today’s era of linkable, sharable, searchable, mobile, on-demand, smart, multi-platform everything?

How could anyone alive in 1903 imagine that those first improbable aviation tests by Orville and Wilbur Wright would land us in an era where 3.6 billion passengers would take a commercial plane flight in a single year? Back then, people mainly laughed at the idea of flying in the first place.

Fast forward to the 1940s and ’50s, when the first computers overtook entire rooms with vacuum tubes and enough metal to construct a passenger train. At the time, nobody in their right mind would have predicted that average school children in 2018 would have access to computers of their own and that they’d regularly take them to and from school, in backpacks.

In just over a century, the world has changed so profoundly that a time-traveler from 1900 would barely recognize anything.

The same thing is happening now in technology, but at a faster pace. The World Wide Web didn’t exist until 1990. Amazon was just a small online book retailer before 1998. The iPhone debuted only 11 years ago, and it has now sold north of 1.2 billion units.

Technology that might have seemed laughably impractical and futuristic only 15 years ago (or that had newly been released to the public) now forms the basis of everyday life. Think about how you navigate from here to there, touch base with friends, buy stuff, watch movies, collaborate with co-workers, find the closest phone repair shop, see when your local coffee shop opens, make dinner reservations, figure out what to do with spaghetti squash, check your bank balance, get rides around the city, etc. etc. etc.

That has to do with massive adoption of (and advances in) what at the time was transformational technology — like GPS, smart phones, publicly accessible APIs, cloud-computing, Big Data analytics, and data security.

If this were 2000, GPS would have just been introduced to the public with its current level of accuracy. Only 8 years later, it was already fully integrated into consumer, industrial, and civic applications and had made Wired’s Top Technology Breakthroughs of 2008. As Wired aptly noted (and as we all know from personal experience) it’s used for just a few things:

We use GPS to navigate our car trips and manage fleets of taxicabs, trucks, buses and rental cars. First responders and package-delivery services rely on GPS. Airplanes fly with it. Fishing boats find their way to rich waters with it. Researchers track wildlife with it, and we even find our way down wilderness trails with it.

The term “cloud computing” wasn’t even coined until 1996. A brief 10 years later, Amazon Web Services launched. By 2007, Netflix had started streaming on-demand video, and enterprises were moving quickly to migrate their data and business operations to the cloud. Today, 96% of IT professionals polled for a 2018 RightScale survey use a cloud strategy in their enterprise, and the massive cloud migration continues.

What accounts for this kind of explosion of transformational technology? Three things: cost reductions, tooling and platform availability, and the acquisition of advanced engineering skills. As costs come down, this kind of leading-edge tech becomes accessible to small- and medium-sized enterprises, not just national governments or large businesses. Once things start to gain traction, developers push to acquire relevant skills, so they can improve the tools and platforms behind the new tech. This sets up a positive feedback loop, with the availability of better tools and platforms enticing more developers to acquire skills, which further pushes technology adoption and demand for better tools.

I hate to use clichés, but with these transformational technologies, you blink and you miss them. That’s why our team at Valence continually stays up on the latest developments, and it’s why we keep in touch with leading-edge experts. We know that what seems fantastical today may be foundational tomorrow. And we make a practice of understanding what’s likely to catch fire. We predict that our six pillars of innovation — Voice & Chat, Blockchain, IoT, Artificial Intelligence, Augmented Reality & Virtual Reality, and Robotics — will be as widespread in the business world in 10 years as GPS and cloud-computing are today.

How about a Dose of Virtual Reality to Ease the Pain?

Virtual Reality and Pain Management

What role do technologies like virtual reality play in patient care for issues such as pain management?

I hate going to the dentist. That insidious high-pitched squeal sends shivers up my spine. The cold water they spray in my mouth makes my teeth ache. For me, it’s a guaranteed hour of incredible discomfort, stress, and pain – that kind of high-pitched pain that only happens when someone inserts sharp objects under the tender tissue at the gum line.

My dentist lets me watch Netflix to distract me from the experience. But I can still see the razor-sharp instruments approaching out of the corner of my eye. Plus, I hear everything going on around me – including that insidious high-pitched squeal.

The good news is that there is a better way! Enter virtual reality for pain management. It turns out it can help with fear and anxiety, too.

Researchers have actually been studying this and conducting legitimate case-control studies that are getting published in medical journals. We’re seeing all sorts of collaborations between health plans or hospitals plus VR headset makers plus insurance companies plus digital tech firms and even pharmaceutical giants.

One intriguing experiment was done at Cedars-Sinai Medical Center in Los Angeles. It’s a major teaching hospital in a large urban area, so all sorts of patients come through the door, from people having heart attacks to others limping in with broken bones. The idea was to compare how much pain patients in the hospital felt when they used immersive 3D VR goggles and headphones versus watching 2D nature videos, one technique doctors currently use to calm and soothe patients and distract them from pain.

For anyone who has tried (and failed) to get kids away from a video game to come to dinner, it’s probably not a huge surprise that the patients in the VR group became fully immersed in the virtual world.

But the statistical results were really impressive: the VR group experienced a roughly 25% drop in pain levels, and twice the pain relief of the regular video watchers. That’s a big deal when you think about our current national crisis of opioid overuse and addiction.

Imagine if doctors could prescribe fantasy vs. fentanyl (meaning harmless VR sessions vs. dangerous, addictive drugs). The head researcher on this study, Brennan Spiegel, believes that this isn’t so far-fetched. He can picture a day when futuristic pharmacies might actually prescribe specialized VR to patients.

Why might this work? Spiegel says this: “The simplest theory is that it’s just distraction. It’s like shining a bright light right into the brain and almost overwhelming it with signals so it runs interference with the brain. Because the brain is so immersed in the experience, it’s unable to simultaneously process the pain signals coming from the body.”

The mechanism might be slightly different for anxiety than for pain, but it seems to work nonetheless. One study has focused on using VR to help veterans recover from PTSD by repeatedly confronting the event that traumatized them, this time from a safe vantage point in a virtual environment. With VR headsets, which patients can borrow to do homework, they can always just walk away if things get too intense. This allows PTSD survivors to work through their trauma and anxiety in manageable baby steps.

The same applies to people with different kinds of fears and phobias. Right now therapists use exposure therapy to help people master phobias. Sufferers are exposed to the source of their fear, say spiders, in real life, over and over again. Eventually, over time, they get desensitized and lose their fear. With VR, therapists can expose people to a virtual fear-inducing environment, slowly increasing the spiders or darkness or height or whatever as the patient calms down (which is measured by tracking brain waves).

Imagine if you used these technologies but could then hook people up to a wearable device like an Apple Watch or an iPhone to capture physiological signals and enable real-time, home-based biofeedback for patients. It’s an intriguing idea and a real possibility, given how accessible and affordable today’s digital tools have become.

This is the kind of innovation that is transforming patient care, improving outcomes, and reducing costs, while also generating troves of valuable data. That’s what our team at Valence is about, and it’s what we offer our healthcare clients, as well.

At Valence, we can add augmented reality or virtual reality experiences for our clients and let them collect and view real-time patient data at the same time. It’s exciting to imagine the possibilities. Interested in hearing more? Contact us, and we’ll start you off with a demo, to show how remarkable this technology can be!