How to Be a Project Manager in the Age of Digital Transformation

by Deborah Keltner

Are you curious about the life of a project manager working in technology and innovation?  

The project manager is the glue that holds a project together, linking the client and partner teams with the technical resources needed to deliver the work. Good project managers are in demand because the integration of technologies, systems, and platforms requires sophisticated project management approaches. A good project manager demonstrates emotional intelligence, business savvy, excellent processes, and confidence holding themselves and others accountable.

Working in digital transformation means that the work a project manager oversees can range from a blog post and digital marketing campaign to an artificial intelligence project that transforms business operations, and everything in between.  

At Valence we hire Project Managers, Marketing Project Managers, and Technical Project Managers. There is significant overlap in the roles, though Technical Project Managers often work closely with highly technical development teams and need a functional knowledge of technical concepts such as cloud technology and agile processes.

We talked to several of our project managers at Valence to understand the day-to-day realities and pull the curtain back on technical project management at our innovation company.  


Collaborate with the client to define the project vision, goals, and process 

“A lot of my interactions with clients involve asking questions to get to the heart of what they want or need. Our clients come to us with an idea of what they want, and they need me to figure out how to pull the team together to get it done,” says Angela Kaiser, Valence senior project manager.  

Our clients and partners often come to us with a high-level vision of their project, and the first order of business for a project manager is to provide the guidance needed to define the project, its features, and its desired results. This is often achieved during regular meetings that cover a lot of territory – the project manager is there to create clarity, move the project forward, and maintain accountability for progress. 

Project managers are ready. The ability to prepare for and conduct a meeting is core to success. Preparation includes being clear on meeting goals, identifying the right attendees, and creating an agenda. When conducting the meetings, our project managers create space for various perspectives to be heard while keeping momentum and progressing the project. PMs follow up with a clear, concise, useful distillation of meeting notes and actions so the team can be held accountable. 

Accountable for accountability 

“I want the designers, content writers, and engineers to feel like I’m on the team – being in the trenches working on product scripts isn’t always glamorous, but it builds cohesiveness with the team. That’s my leadership style,” says Glen Lewis, Technical Project Manager at Valence.

You can’t talk about project management without talking about accountability. The entire team, including clients and partners, depends on the project manager to track action items, deadlines, and the project status. While the accountability to get tasks done is distributed throughout the team, the ultimate responsibility for ensuring the project is completed on time and with the desired result falls on the project manager. So, you should be comfortable tracking commitments, holding people to them, and of course meeting any commitments that you make. 

“As the project manager, I need to bring in the right people and then help those people stay focused on what the project needs next. My role involves lots of follow-up, some chasing down, and open transparent communication,” says Allison Pass, a senior project manager with Valence since 2019.  

Holding people accountable doesn’t mean the same thing as nagging! In fact, a soft skill that is core to successful project management is making it easy for people to understand what is needed from them and following up effectively. It helps to like people and the teams you are working with.  

Our project managers work with clients and partners around the world. Any given meeting may have attendees from Iceland, Seattle, Nashville, and Munich. Having a natural ability to connect with diverse people and backgrounds makes the back-and-forth of getting projects done easier and more fun for everyone. 

Inevitably, something will happen with a client or project that requires the team to switch gears or make last-minute changes to a deliverable. Project managers need a steady hand and a flexible approach when those moments come up because they set the tone for the team and also help the client to understand the implications of their change.  

Holding people accountable is easier in a culture where leadership supports its project managers. We support project managers by making it clear that we have our team’s back, and our focus is on making the situation better. We expect the same from our clients and are happy to report that our clients are just as solution-oriented as we are. 

Part of a collaborative team of smart and creative people 

“I’ve reached out to people who weren’t on my project for help, suggestions, collaboration – and all of our coworkers are always willing to help and give time to others even when it’s not their project,” says Angela Kaiser, “I’ve been pleasantly surprised because a lot of us are remote and haven’t been able to meet in person yet, so the collaboration really surprised me and I love it.”   

In addition to the project team, which includes clients and partners, a project manager is also part of the larger Valence team. Our flat, open-door culture means there is a lot of collaboration with team members who may not be assigned to your project.  

Angela continues, “Before I was a project manager, my jobs were about paying attention to only what I was doing. As a project manager, it’s my job to help make other people’s jobs easier, to help organize workstreams, and to help our team collaborate to get the job done. I love working with other people and seeing what I can do to help them.”  

Our project managers work with an eclectic group of people in a unique environment. The combination of skills and professionals needed to deliver innovation and digital transformation means there is a lot of variety in the perspectives of the team. This also means that it’s a fertile environment for a variety of opinions, challenges to norms and assumptions, and creative thinking across strategy, design, and engineering teams. Being a project manager puts you at the center of these conversations.  

“I’m excited to see my coworkers in meetings because I genuinely like them,” says Glen Lewis, “I wake up in the morning and I’m excited to hang out with these people, have dynamic discussions, and talk about things that excite me.” 

“You can expect to work for a great company with some really talented people and a great internal support system. Any time I run into questions and want to know more about something, there is someone at Valence who will help me out. If I reach out, the help is always there,” adds Allison Pass.

Closing 

“The company puts me in these roles for two reasons: One is to deliver per the contract, and the other is to provide an experience that makes our clients want to come back to Valence. They have a problem and I’m here to help them. I’m very motivated by the results,” says Glen Lewis. 

Project management can offer a long and rewarding career path. The digital transformation industry is incredibly dynamic and the project teams are motivated, innovative, and creative. Project managers get to work with clients and partners from some of the biggest brands in the world as they work on their most innovative projects. It’s hard work, and the successes are quite a thrill.  

Additional Resources:

How to Develop a Data Retention Policy


How to Develop a Data Retention Policy

by Steven Fiore

We help organizations implement a unified data governance solution to manage and govern their on-premises, multi-cloud, and SaaS data. That solution will always include a data retention policy.

When planning a data retention policy, you must be relentless in asking the right questions that will guide your team toward actionable and measurable results. By approaching data retention policies as part of the unified data governance effort, you can easily create a holistic, up-to-date approach to data retention and disposal. 

Ideally any group that creates, uses, or disposes of data in any way will be involved in data planning. Field workers collecting data, back-office workers processing it, IT staff responsible for transmitting and destroying it, Legal, HR, Public Relations, Security (cyber and physical) and anyone in between that has a stake in the data should be involved in planning data retention and disposal.

The first step is to understand what data you have today. Thanks to decades of organizational silos, many organizations don’t understand all the data they have amassed. Conducting a data inventory or unified data discovery is a critical first step.  

Next, you need to understand the requirements of the applicable regulation or regulations in your industry and geographical region so that your data planning and retention policy addresses compliance requirements. No matter your organization’s values, compliance is required and needs to be understood.

Then, businesses should identify where data retention may be costing the business or introducing risk. Understanding the risk and inefficiencies in current data processes may help identify what should be retained and for how long, and how to dispose of the data when the retention expires.

If the goal is to increase revenue or contribute to social goals, then you must understand which data affords that possibility, and how much data you need to make the analysis worthwhile. Machine Learning requires massive amounts of data over extended periods of time to increase the accuracy of the learning, so if machine learning and artificial intelligence outcomes are key to your revenue opportunity, you will require more data than you would need to use traditional Business Intelligence for dashboards and decision making.


What types of data should be included in the data retention policy?

The types of data included in the data retention policy will depend on the goals of the business. Businesses need to be thoughtful about what data they don’t need to include in their policies. Retaining and managing unneeded data costs organizations time and money – so identifying the data that can be disposed of is important and too often overlooked.

Businesses should consider which innovation technologies are included in their digital roadmap. If machine learning, artificial intelligence, robotic process automation, and/or intelligent process automation are in your technology roadmap, you will want a strategy for data retention and disposal that will feed the learning models when you are ready to build them. Machine learning could influence retention periods, while the Internet of Things can affect what data is included, since IoT devices tend to create enormous amounts of data. Robotic or Intelligent Process Automation is another example where understanding which data is most essential to highly repeatable processes could dictate what data is held and for how long.

One final consideration is whether non-traditional data sources should be included. Do voice mails or meeting recordings need to be included? What about pictures that may be stored along with documents? Security camera footage? IoT or server logs? Metadata? Audit trails? The list goes on, and the earlier these types of data are considered, the easier they will be to manage.
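To make this concrete, here is a minimal sketch of how a retention schedule might be expressed as data and checked in code. The data types and retention periods below are hypothetical placeholders for illustration only, not compliance guidance.

```python
# Illustrative sketch only: a retention schedule expressed as data.
# Every data type and retention period here is a hypothetical placeholder.
from datetime import date, timedelta

RETENTION_RULES = {
    "invoice": timedelta(days=365 * 7),       # e.g., financial records kept seven years
    "server_log": timedelta(days=90),
    "meeting_recording": timedelta(days=30),
    "iot_telemetry": timedelta(days=180),
}

def is_expired(data_type, created_on, today=None):
    """Return True when a record has outlived its retention period and is due for disposal."""
    today = today or date.today()
    return (today - created_on) > RETENTION_RULES[data_type]

print(is_expired("server_log", date(2021, 1, 1), today=date(2021, 6, 1)))  # True
```

Even a simple table like this forces the conversation about which data types exist, who owns them, and when they can safely be disposed of.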

Avoid these pitfalls

The paradox is that the two biggest mistakes organizations make when building a data retention policy are either not taking enough time to plan or taking too much time to plan. Spending too much time planning can lead to analysis paralysis, letting a data catastrophe occur before a solution is ever implemented. One way to mitigate this risk is to take an iterative approach so you can learn from small issues before they become big ones.

A typical misstep by organizations when building a data retention policy is that they don’t understand their objectives from the outset. Organizations need to start by clearly stating the goals of their data policy, and then build a policy that supports those goals. We talked about the link between company goals and data policies here.

One other major pitfall organizations fall into when building a data retention policy is that they don’t understand their data, where it lives, and how it’s interrelated. Keeping data unnecessarily is as bad as disposing of data you need – and in highly siloed organizations, data interdependencies might not surface until needed data is suddenly missing or data that should have been disposed of surfaces in a legal discovery. This is partially mitigated by bringing the right people into the planning process so that you can understand the full picture of data implications in your organization.

In closing

The future of enterprise effectiveness is driven by advanced data analytics and insights. Businesses of all sizes are including data strategies in their digital transformation roadmap, which must include data governance, data management, business planning and analysis, and intelligent forecasting. Understand your business goals and values, and then build the data retention policies that are right for you.

We are here to help.

Additional Resources:

The Right Data Retention Policy for Your Organization


The Right Data Retention Policy for Your Organization

by Steven Fiore

Every business needs a strategy to manage its data, and that strategy should include a plan for data retention. Before setting a data retention policy, it’s important to understand the purpose of the policy and how it can contribute to organizational goals. 

There are four values that drive most businesses to do anything:  

  • To make money and increase revenue
  • To save money by decreasing costs
  • Because they must comply with regulations
  • Because they want to use the business as a platform for social good

While each of these values will be represented in any organization, some investigation will usually reveal that one or two of these values outshine the rest. Which values are most important will vary from one organization to another. 

Organizations need to start by clearly stating the goals of their data policy, and then build a policy that supports those goals. We help companies unearth business drivers so data policies can contribute to the company values and goals rather than compete with them. 

In this post, we explore best practices in establishing and maintaining a data retention policy through the lens of these business drivers.  

What are the goals of your data retention policy?

Value: Make Money

Companies that rely on advertising revenue like Google and Facebook want to keep as much data as necessary to maximize revenue opportunities.  

Companies that mine their data can spot trends in their data that inform product enhancements, improve customer experience (driving brand loyalty), and reveal revenue opportunities that would have otherwise been hidden. 

In both cases, the data retention policy should focus on what data can contribute to revenue, and how much of it is needed. Balancing aggregate data versus more granular data is the key so you retain enough data to achieve your objectives without retaining unneeded data that adds cost, complexity, and security or privacy risks.   

Value: Save Money

Many businesses focus on the bottom line and prioritize efficiency to avoid wasting time, money, and energy. 

Businesses that want to save money can use data retention to make the organization more efficient. While data storage is inexpensive, it isn’t free – and access can be more expensive than storage. So, for an organization that wants its data policies to help save money, the policy might focus on retaining only the data that is necessary to avoid extra storage and management overhead. 

Further, retaining more data than you need to can be a legal liability. Having a data retention and disposal policy can reduce legal expenses in the event of a legal discovery process.  

There’s also an efficiency cost to data – the more data you have, the slower the process will be to search and use that data. So, data retention policies can and should be part of a data governance strategy aimed at making the data that is retained as efficient to manage and use as possible. 

Value: Comply with Regulations

Many industries have their own regulations while some regulations cross industries. Businesses that must have a data retention policy may need it to comply with laws that govern data retention such as the Sarbanes Oxley Act, the Health Insurance Portability and Accountability Act (HIPAA), or IRS 1075. Even US-based companies may be subject to international legislation such as the European General Data Protection Regulation (GDPR), and companies that have customers in California need to understand how the California Consumer Privacy Act (CCPA) can impact data retention. Government agencies in the US are also bound by the Freedom of Information Act and some states have “Sunshine” laws that go even further.  

Businesses that are motivated to comply with regulations will need their data retention policy to reflect federal, state, and local requirements, and will need to document compliance with those requirements. 

Value: Business as a Platform for Social Good

Whether an organization was established as an activist brand or has been drawn to social responsibility as investor demand for it has risen, many companies are finding ways to use data to understand their social and environmental impact. This impact is often reported through Environmental Social Governance (ESG) reporting, Carbon Disclosure Projects, and reporting structures like GRESB (Global Real Estate Sustainability Benchmark).

In these cases, organizations that use their business as a platform for social good may identify key metrics such as energy consumption or hiring data that can be used to inform reports on social responsibility.

In closing

By understanding your organization’s values and priorities, you can ensure that its policies support those values. Every company has data to collect, manage, and dispose of, so it’s critical to have a roadmap for how to address data requirements today and into the future. This framework is a starting point to that effort because there’s nothing worse than going through the effort to implement a complex policy, only to discover that it moves the business further from its goals.  

Additional resources:

Types of Machine Learning

Training the Machines: An Introduction To Types of Machine Learning

by Yuri Brigance

I previously wrote about deep learning at the Edge. In this post I’m going to describe the process of setting up an end-to-end Machine Learning (ML) workflow for different types of machine learning.

There are three common types of machine learning training approaches, which we will review here:

  1. Supervised
  2. Unsupervised
  3. Reinforcement

And since all learning approaches require some type of training data, I will also share three methods to build out your training dataset via:

  1. Human Annotation
  2. Machine Annotation
  3. Synthesis / Simulation

Supervised Learning:

Supervised learning uses a labeled training set of both inputs and outputs to teach a model to yield the desired outcome. This approach typically relies on a loss function, which is used to evaluate training accuracy until the error has been sufficiently minimized.
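As a minimal sketch of that idea, here is a tiny supervised training loop, assuming PyTorch and a made-up labeled dataset; the model repeatedly compares its predictions to the known labels and adjusts its weights to reduce the loss.

```python
# Minimal supervised-learning sketch (hypothetical data and model, PyTorch assumed).
import torch
import torch.nn as nn

# Toy labeled dataset: inputs X and known outputs y.
X = torch.randn(256, 4)                       # 256 samples, 4 features each
y = (X.sum(dim=1, keepdim=True) > 0).float()  # label = 1 when the feature sum is positive

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.BCEWithLogitsLoss()              # the "loss function" that measures error
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(100):
    optimizer.zero_grad()
    predictions = model(X)
    loss = loss_fn(predictions, y)            # compare predictions to the labels
    loss.backward()                           # compute gradients
    optimizer.step()                          # adjust weights to reduce the error

print(f"final training loss: {loss.item():.4f}")
```

A real project swaps the toy tensors for batched, annotated data and a far larger model, but the teacher-and-student loop stays the same.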

This type of learning approach is arguably the most common, and in a way, it mimics how a teacher explains the subject matter to a student through examples and repetition.

One downside to supervised learning is that this approach requires large amounts of accurately labeled training data. This training data can be annotated manually (by humans), via machine annotation (annotated by other models or algorithms), or completely synthetic (ex: rendered images or simulated telemetry). Each approach has its pros and cons, and they can be combined as needed.

Unsupervised Learning:

Unlike supervised learning, where a teacher explains a concept or defines an object, unsupervised learning gives the machine the latitude to develop understanding on its own. Often with unsupervised learning, the machines can find trends and patterns that a person would otherwise miss. Frequently these correlations elude common human intuition and can be described as non-semantic. For this reason, the term “black box” is commonly applied to these models, such as the awe-inspiring GPT-3.

With unsupervised learning, we give data to the machine learning model that is unlabeled and unstructured. The computer then identifies clusters of similar data or patterns in the data. The computer might not find the same patterns or clusters that we expected, as it learns to recognize the clusters and patterns on its own. In many cases, being unrestricted by our preconceived notions can reveal unexpected results and opportunities.   
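Here is a minimal clustering sketch of that idea, assuming scikit-learn and synthetic, unlabeled data; k-means is handed raw observations with no labels and discovers the groupings on its own.

```python
# Minimal unsupervised-learning sketch using k-means clustering (scikit-learn assumed).
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: 300 observations with 2 features each, drawn from two hidden groups.
rng = np.random.default_rng(seed=42)
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(150, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(150, 2)),
])

# The model receives no labels; it discovers the clusters on its own.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)   # learned cluster centers
print(kmeans.labels_[:10])       # cluster assignment for the first 10 samples
```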

Reinforcement Learning:

Reinforcement learning teaches a machine to act using a semi-supervised, reward-based approach. The machine is rewarded for correct answers, and it wants to be rewarded as much as possible. Reinforcement learning is an efficient way to train a machine to learn a complicated task, such as playing video games or teaching a legged robot to walk.
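As a minimal illustration of reward-driven learning, here is a toy “bandit” sketch, assuming NumPy; the agent gradually learns which action pays the most by balancing random exploration with exploiting its current estimates.

```python
# Minimal reinforcement-learning sketch: an epsilon-greedy agent learning which
# action yields the highest reward in a toy "bandit" problem (NumPy assumed).
import numpy as np

rng = np.random.default_rng(0)
true_payouts = [0.2, 0.5, 0.8]          # hidden reward probability of each action
estimates = np.zeros(3)                 # the agent's learned value of each action
counts = np.zeros(3)
epsilon = 0.1                           # how often the agent explores at random

for step in range(5000):
    if rng.random() < epsilon:
        action = int(rng.integers(3))           # explore: try a random action
    else:
        action = int(np.argmax(estimates))      # exploit: pick the best-known action
    reward = float(rng.random() < true_payouts[action])  # 1 if rewarded, else 0
    counts[action] += 1
    # Nudge the running estimate of this action's value toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the learned estimates favor the highest-paying action (index 2)
```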

The machine is motivated to be rewarded, but the machine doesn’t share the operator’s goals. So if the machine can find a way to “game the system” and get more reward at the cost of accuracy, it will greedily do so. Just as machines can find patterns that humans miss in unsupervised learning, machines can also find missed patterns in reinforcement learning, and exploit those invisible patterns to receive additional reinforcement. This is why your experiment needs to be airtight to minimize exploitation by the machines.

For example, an AI twitterbot that was trained with reinforcement learning was rewarded for maximizing engagement. The twitterbot learned that engagement was extremely high when it posted about Hitler.

This machine behavior isn’t always a problem – for example reinforcement learning helps machines find bugs in video games that can be exploited if they aren’t resolved.

Datasets:

Machine Learning implies that you have data to learn from. The quality and quantity of your training data has a lot to do with how well your algorithm can perform. A training dataset typically consists of samples, or observations. Each training sample can be an image, audio clip, text snippet, sequence of historical records, or any other type of structured data. Depending on which machine learning approach you take, each sample may also include annotations (correct outputs / solutions) that are used to teach the model and verify the results. Training datasets are commonly split into groups where the model only trains on a sub-set of all available data. This allows a portion of the dataset to be used for validation of the model, to ensure that the model generalizes well and performs well on data it has not seen before.
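A minimal sketch of that train/validation split, assuming scikit-learn and placeholder data standing in for real samples and annotations:

```python
# Minimal sketch of splitting a dataset into training and validation sets
# (scikit-learn assumed; X and y are placeholders for your samples and annotations).
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 10)              # 1,000 samples, 10 features each
y = np.random.randint(0, 2, size=1000)    # placeholder annotations

# Hold out 20% of the data so the model is validated on samples it never trained on.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_val.shape)         # (800, 10) (200, 10)
```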

Regardless of which training approach you take, your model can be prone to bias which may be inadvertently introduced through unbalanced training data, or selection of the wrong inputs. One example is an AI criminal risk assessment tool used by courts to evaluate how likely a defendant is to reoffend based on their profile as input. Because the model was trained on historical data, which included years of disproportionate targeting by law enforcement of low-income and minority groups, the resulting model produced higher risk scores for low-income and minority individuals. It is important to remember that most machine learning models pick up on statistical correlations, and not necessarily causations.

Therefore, it is highly desirable to have a large and balanced training dataset for your algorithm, which is not always readily available or easy to obtain. This is a task which may initially be overlooked by businesses excited to apply machine learning to their use cases. Dataset acquisition is as important as the model architecture itself.

One way to ensure that the training dataset is balanced is through a Design of Experiments (DOE) approach, where controlled experiments are planned and analyzed to evaluate the factors which control the value of an output parameter or group of parameters. DOE allows multiple input factors to be manipulated to determine their effect on the model’s response. This gives us the ability to exclude certain inputs which may lead to biased results, as well as gain a better understanding of the complex interactions that occur inside the model.

Here are three examples of how training data is collected, and in some cases generated:

  1. Human Labeled Data:

What we refer to as human labeled data is anything that has been annotated by a living human, either through crowdsourcing or by querying a database and organizing the dataset. An example of this could be annotating facial landmarks around the eyes, nose, and mouth. These annotations are pretty good, but in certain instances they can be imprecise. For example, the definition of “the tip of the nose” can be interpreted differently by different humans who are tasked with labeling the dataset. Even simple tasks, like drawing a bounding box around apples in photos, can have “noise” because the bounding box may have more or less padding, may be slightly off center, and so on.

Human labeled data is a great start if you have it. But hiring human annotators can be expensive and prone to error. Various services and tools exist, from AWS SageMaker GroundTruth to several startups which make the labeling job easier for the annotators, and also connect annotation vendors with clients.

It might be possible to find an existing dataset in the public domain. In an example with facial landmarks, we have WFLW, iBUG, and other publicly available datasets which are perfectly suitable for training. Many have licenses that allow commercial use. It’s a good idea to research whether someone has already produced a dataset that fits your needs, and it might be worth paying for a small dataset to bootstrap your learning process.

  2. Machine Annotation:

In plain terms, machine annotation is where you take an existing algorithm or build a new algorithm to add annotations to your raw data automatically. It sounds like a chicken and egg situation, but it’s more feasible than it initially seems.

For example, you might already have a partially labeled dataset. Let’s imagine you are labeling flowers in bouquet photos, and you want to identify each flower. Maybe you had some portion of these images already annotated with tulips, sunflowers, and daffodils. But there are still images in the training dataset that contain tulips which have not been annotated, and new images keep coming in from your photographers.

So, what can you do? In this case, you can take all the existing images where the tulips have already been annotated and train a simple tulip-only detector model. Once this model reaches sufficient accuracy, you can fill in the remaining missing tulip annotations automatically. You can keep doing this for the other flowers. In fact, you can crowdsource humans to annotate just a small batch of images with a specific new flower, and that should be enough to build a dedicated detector that can machine-annotate your remaining samples. In this way, you save time and money by not having humans annotate every single image in your training set or every new raw image that comes in. The resulting dataset can be used to train a more complete production-grade detector, which can detect all the different types of flowers. Machine annotation also gives you the ability to continue improving your production model by continuously and automatically annotating new raw data as it arrives. This achieves a closed-loop continuous training and improvement cycle.
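Here is a simplified sketch of that pseudo-labeling loop; tulip_detector is a hypothetical model already trained on the small human-labeled subset, and the confidence threshold decides which images can be machine-annotated versus routed back to human annotators.

```python
# Simplified machine-annotation (pseudo-labeling) sketch. `tulip_detector` and its
# output format are hypothetical stand-ins for a model trained on human-labeled data.
def machine_annotate(unlabeled_images, tulip_detector, confidence_threshold=0.9):
    """Auto-label images the detector is confident about; route the rest to humans."""
    auto_labeled, needs_review = [], []
    for image in unlabeled_images:
        prediction = tulip_detector(image)          # e.g. {"label": "tulip", "score": 0.97}
        if prediction["score"] >= confidence_threshold:
            auto_labeled.append((image, prediction["label"]))
        else:
            needs_review.append(image)              # low confidence: send to a human annotator
    return auto_labeled, needs_review
```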

Another example is where you have incompatible annotations. For example, you might want to detect 3D positions of rectangular boxes from webcam images, but all you have are 2D landmarks for the visible box corners. How do you estimate and annotate the occluded corners of each box, let alone figure out their position in 3D space? Well, you can use a Principal Component Analysis (PCA) morphable model of a box, fit it to the 2D landmarks, and then de-project the fitted shape into 3D space using the camera intrinsics. This gives you full 3D annotations, including the occluded corners. Now you can train a model that does not require PCA fitting.

In many cases you can put together a conventional deterministic algorithm to annotate your images. Sure, such algorithms might be too slow to run in real-time, but that’s not the point. The point is to label your raw data so you can train a model, which can be inferenced in milliseconds.

Machine annotation is an excellent choice to build up a huge training dataset quickly, especially if your data is already partially labeled. However, just like with human annotations, machine annotation can introduce errors and noise. Carefully consider which annotations should be thrown out based on a confidence metric or some human review, for example. Even if you include a few bad samples, the model will likely generalize successfully with a large enough training set, and bad samples can be filtered out over time.

  3. Synthetic Data:

With synthetic data, machines are trained on renderings or in hyper-realistic simulations – think of a video game of a city commute, for example. For Computer Vision applications, a lot of synthetic data is produced via rendering, whether you are rendering people, cars, entire scenes, or individual objects. Rendered 3D objects can be placed in a variety of simulated environments to approximate the desired use case. We’re not limited to renderings either, as it is possible to produce synthetic data for numeric simulations where the behavior of individual variables is well known. For example, modeling fluid dynamics or nuclear fusion is extremely computationally intensive, but the rules are well understood – they are the laws of physics. So, if we want to approximate fluid dynamics or plasma interactions quickly, we might first produce simulated data using classical computing, then feed this data into a machine learning model to speed up prediction via ML inference.

There are many examples of commercial applications of synthetic data. For example, what if we needed to annotate the purchase receipts for a global retailer, starting with unprocessed scans of paper receipts? Without any existing metadata, we would need humans to manually review and annotate thousands of receipt images to assess buyer intentions and semantic meaning. With a synthetic data generator, we can parameterize the variations of a receipt and accurately render them to produce synthetic images with full annotations. If we find that our model is not performing well under a particular scenario, we can just render more samples as needed to fill in the gaps and re-train.
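As a simplified sketch of that idea, the snippet below renders a toy “receipt” image with Pillow and emits the matching annotations; the layout, items, and bounding boxes are deliberately crude placeholders for a real synthetic data generator.

```python
# Illustrative sketch: generating a synthetic "receipt" image with known annotations
# (Pillow assumed; layout, items, and fonts are simplified placeholders).
import random
from PIL import Image, ImageDraw

def render_receipt(items):
    """Render a toy receipt and return the image plus its known annotations."""
    image = Image.new("RGB", (300, 30 + 20 * len(items)), "white")
    draw = ImageDraw.Draw(image)
    annotations = []
    y = 10
    for name, price in items:
        line = f"{name:<15}{price:>8.2f}"
        draw.text((10, y), line, fill="black")
        # Ground truth comes for free: we know what text was drawn and roughly where.
        annotations.append({"text": line, "bbox": (10, y, 290, y + 14)})
        y += 20
    return image, annotations

items = [("Coffee", 3.50), ("Bagel", round(random.uniform(1.0, 4.0), 2))]
receipt, labels = render_receipt(items)
receipt.save("synthetic_receipt.png")
print(labels)
```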

Another real-world example is in manufacturing where “pick-and-place” robots use computer vision on an assembly line to pack or arrange and assemble products and components. Synthetic data can be applied in this scenario because we can use the same 3D models that were used to create injection molds of the various components to make renderings as training samples that teach the machines. You can easily render thousands of variations of such objects being flipped and rotated, as well as simulate different lighting conditions. The synthetic annotations will always be 100% precise.

Aside from rendering, another approach is to use Generative Adversarial Network (GAN) generated imagery to create variation in the dataset. Training GAN models usually requires a decent number of raw samples. With a fully trained GAN autoencoder it is possible to explore the latent space and tweak parameters to create additional variation. Although it’s more complex than classical rendering engines, GANs are gaining steam and have their place in the synthetic data generation realm. Just look at these generated portraits of fake cats! 

Choosing the right approach:

Machine learning is on the rise across industries and in businesses of all sizes. Depending on the type of data, the quantity, and how it is stored and structured, Valence can recommend a path forward which might use a combination of the data generation and training approaches outlined in this post. The order in which these approaches are applied varies by project, and boils down to roughly four phases:

  1. Bootstrapping your training process. This includes gathering or generating initial training data and developing a model architecture and training approach. Some statistical analysis (DOE) may be involved to determine the best inputs to produce the desired outputs and predictions.
  2. Building out the training infrastructure. Access to Graphics Processing Unit (GPU) compute in the cloud can be expensive. While some models can be trained on local hardware at the beginning of the project, long-term a scalable and serverless training infrastructure and proper ML experiment lifecycle management strategy is desirable.
  3. Running experiments. In this phase we begin training the model, adjusting the dataset, experimenting with the model architecture and hyperparameters. We will collect lots of experiment metrics to gauge improvement.
  4. Inference infrastructure. This includes integrating the trained model into your system and putting it to work. This can be cloud-based inference, in which case we’ll pick the best serverless approach that minimizes cloud expenses while maximizing throughput and stability. It might also be edge inference, in which case we may need to optimize the model to run on a low-powered edge CPU, GPU, TPU, VPU, FPGA, or a combination thereof.

What I wish every reader understood is that these models are simple in their sophistication. There is a discovery process at the onset of every project where we identify the training data needs and which model architecture and training approach will get the desired result. It sounds relatively straightforward to unleash a neural network on a large amount of data, but there are many details to consider when setting up Machine Learning workflows. Just like real-world physical research, Machine Learning requires us to set up a “digital lab” which contains the necessary tools and raw materials to investigate hypotheses and evaluate outcomes – which is why we call AI training runs “experiments”. Machine Learning has such an array of truly incredible applications that there is likely a place for it in your organization as part of your digital journey.

Additional resources:

A New Approach to QA

A New Approach to Quality Assurance


There’s no single path that can bring someone into a career in technology, and Quality Assurance is a common entry point throughout the tech industry, including at Valence.

Someone in QA is responsible for improving software development processes and preventing defects in production. The truth is that the industry hasn’t done a great job of making QA a great job. It’s common to hear stories about long thankless hours, short notice, disorganized processes, and QA engineers being treated as an afterthought. We’ve worked hard to get QA right at Valence.

We have built QA into our agile engineering processes, which is as good for our employees as it is for our clients. What we do with QA is common for Agile teams, but it isn’t the typical process, structure, or employment path in the gaming industry, so we have outlined a few of the things that make our QA program at Valence unique.

The driving principle of our QA program is that we recognize that our QA engineers are vital members of a cross-functional team. We are invested in each QA engineer and their career path, making mutual long-term commitments. Valence is in the software industry, where QA engineers can enjoy greater career longevity and professional growth, and stay ahead of the curve with the latest and greatest technologies.

A typical day in QA at Valence 

“My main objective is to deliver a quality product to our customers. We test products from CX perspectives and from functionality perspectives to ensure best-in-class experiences on all platforms like web browsers, mobile, and tablets.”

Raanadil Shaikh, Quality Assurance Lead

Our QA team works on technology-driven projects, testing features across multiple platforms, focusing on the end-customer and their experiences.  

Valence’s teams follow a standard agile scrum practice. We involve the QA engineers as we are starting and finishing sprints so we can include the QA team’s needs and expectations in our planning. Like many technology firms, our QA team uses pull requests to trigger testing. The tester runs the test cases defined in sprint planning. If the feature meets the acceptance criteria, it is validated.

One of the features of the QA process that makes it so effective is that it is flexible.

“The QAs at Valence have a lot of flexibility to create the structure and testing that is right for the project. In previous QA roles, I’ve had to follow a very strict prescribed process, even if it wasn’t right for the project. That’s not the case here.”

Emily Bright, QA Analyst

Bright adds, “QA is heavily impacted by the people you work with. Our job is to find issues with the work that other people do. Everyone at Valence is very open and accepting of the QA team’s input, and they tend to presume that the QA engineer is correct, which is really nice.” 

What’s also unique is that we share tools across teams as much as possible. Our QA team uses the same tools as the rest of the Engineering team. Our QA team is empowered to work with different versions of the code (using the Git command line) and to deploy builds to their local machine or cloud stack. Our QA team doesn’t write code, but they interact with code. This isn’t as common a practice at other firms as it should be.

Automation is a big part of QA at Valence, which dramatically improves the work experience of our team. We use automated testing, so our QA engineers don’t have to repeatedly run the same tedious processes, which is exhausting and uninspiring. Thanks to automated acceptance tests, we get to focus our QA engineers’ attention on the more interesting aspects of the project: the new features, ad hoc testing, and the end-to-end customer experience. It’s a big part of the reason that our QA team is happier than most.
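As a simplified illustration of what an automated acceptance test can look like, here is a small pytest-style example; checkout_total is a hypothetical feature standing in for real project code, and the assertions mirror acceptance criteria agreed on in sprint planning.

```python
# Illustrative sketch only: a tiny automated acceptance test written for pytest.
# `checkout_total` is a hypothetical feature, not code from an actual Valence project.

def checkout_total(cart_prices, tax_rate=0.10):
    """Hypothetical feature under test: compute an order total with tax applied."""
    return round(sum(cart_prices) * (1 + tax_rate), 2)

def test_checkout_total_applies_tax():
    # Acceptance criterion from sprint planning: a $10.00 + $5.00 cart totals $16.50 with 10% tax.
    assert checkout_total([10.00, 5.00]) == 16.50

def test_empty_cart_totals_zero():
    assert checkout_total([]) == 0.0
```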

“Monotony is the biggest risk of many other common QA roles, but that’s not the case at Valence because of the variety of projects and technologies we use,” according to Jaison Wattula, who oversees Valence’s QA program. Shaikh agrees, “I’m challenged every day because I’m not limited to one product for a long time – I’m testing the latest products on different platforms, which is really cool.” 

Additionally, Valence QA engineers are often front and center with the client, our developers, and their peers. Our QA team touches every feature and needs to understand the project goals, development principles, and approach as well as any other member of the team. Since they are the first line of defense against bugs and errors, the project goes better when the QA engineers collaborate with cross-functional teams (including clients) and participate in decision-making.  

“Valence is a special place because there is a real appreciation for diverse perspectives and viewpoints – I love collaborating with coworkers and clients to find the right innovation for a project.”

Raanadil Shaikh, Quality Assurance Lead

Bright adds, “When I find issues (which is the fun part), I usually either come up with a solution, offer a workaround, or find another way to help the team solve the bug. This is faster, more collaborative, and an important part of my contribution.”  

While we have typical days, we don’t have typical projects. Our QA team needs to be comfortable interacting with new and emerging technologies. Valence has a varied roster of services and technologies, and our QA team interacts with all of it.  

Who thrives in QA at Valence? 

Our team is successful because the people here are passionate tech enthusiasts who are detail-oriented, curious, and want to contribute to the whole development process.  

Valence has an Always Learning culture, and this is particularly true of the QA team. People who love learning new technologies, platforms, tools, and best practices thrive here. “There are never ending learning possibilities,” says Shaikh. 

Ambitious tech enthusiasts who want to use QA as a steppingstone to other parts of the industry do well here. While it’s rare to transition from QA to code writing, the QA engineer role is a clear incubator role to grow into other technology positions and expand skills. Because our QA engineers are exposed to the process and every role within that process, they are uniquely positioned to choose their next career step within Valence. QA engineers can grow into project coordination, project management, dashboarding and visualization, and more. It’s the right place to start if you want to grow into the non-code and more abstract technology roles where you guide and support client projects.  

“We work hard to hire the right people for our QA team, and even with all that effort, nothing makes me happier than seeing a member of our QA team get promoted into other areas of the business.”

Jaison Wattula, Director of Reporting and Automation

Does this sound like you? We’re hiring! You can find the Quality Assurance Lead job description here. 

Technology is Your Untapped Weapon in the Talent War

By Sarah Hansen

The demand for skilled labor and the effort required to keep employees engaged and committed in this highly competitive market are well-documented. CEOs and CHROs have been talking about the talent war since before our company was founded. And based on recent analysis, company leaders are showing some serious battle scars as the demand for talent begins to overtake COVID response as a top concern for leadership.

Technology has a central role to play in the talent strategy for organizations.

These concerns are compounded by signs of a potential hangover from the COVID pandemic: a “resignation boom” or “turnover tsunami.” A March 2021 survey by Prudential found that 1 in 4 workers are thinking about resigning, whether it’s to seek adventure, recharge, or because they are rethinking life choices.

The mixed messages about job reports, unemployment, and recruiting challenges can be a lot to take in. One truth amid these trends is that technology has a central role to play in the talent strategy for organizations.

There are several reasons that digital transformation needs to be part of your business strategy, and recruiting is quickly moving to the top of the list.

According to a study by Monster, 82% of employers are confident that they will be ready to ramp up hiring this year, and 40% of respondents are filling new roles on top of the vacancies created by the pandemic economy. At Valence, we are certainly among them — as VP of People and Talent, I am experiencing the intense hiring environment first-hand.

Here are a few of my key takeaways as I go back into battle in this talent war:

Interviewing/Recruiting

The upside to remote recruiting is that we can more easily schedule interview panels so that candidates can meet more of the team in a shorter period. This allows greater scheduling flexibility, and more importantly, it gives people greater access to each other. If your recruiting strategy includes pursuing passive candidates, remote interviewing also makes it easier for candidates to schedule a conversation without disrupting their commitments to their current employer. Obviously, we are excited about the role that virtual reality could play in the recruiting process, but there are also some mainstream technologies to support remote recruiting, such as:

  • Video conferencing
  • Scheduling apps
  • Recruitment marketing automation
  • Mobile-first recruiting assets
  • Applicant tracking systems
  • Skills assessment platforms

Remote Work, Culture, and Retention

This issue is close to my heart — we work hard to find the best talent, and we’ve curated a company culture that is compelling, motivating, and engaging. How does that culture shift when we aren’t in the office together? The human-to-human connection starts with having the right people in the company in the first place.

The number of remote workers increased by 140% between 2005 and 2020, and those numbers will only go up post-pandemic. Having the infrastructure in place to onboard, engage, and collaborate with remote workers increases the talent pool. And our internal employee satisfaction surveys support what many third-party studies are finding, which is that for certain workers, satisfaction is higher when remote work is an option. If you are filling technical positions, it’s also helpful to have remote-accessible assessment tools to replace the live whiteboarding exercises of pre-pandemic recruiting. Anecdotal feedback has been that candidates find this less stressful, and they feel their performance is better reflected without the whiteboard time constraints. The essentials to support remote work and culture include:

  • Mobile-friendly platforms
  • Streamlined and consistent data infrastructure and document management
  • Internal social media
  • Video conferencing
  • Chat
  • Virtual white boards and collaboration platforms
  • Survey tools to check in on employee sentiment and satisfaction

Work is changing, business is evolving, and technology is at the center of everything. Even with the unknowns facing the business community in the next many months, there is no doubt that technology is going to be a significant piece of the puzzle.

And since I’ve got you here, we’re hiring! If you are interested in working with a responsible company with an amazing team at the intersection of innovation and inspiration, check out our careers page!

Additional Resources

Deep Learning at the Edge

Deep Learning at the Edge

By Yuri Brigance

What is deep learning?

Deep learning is a subset of machine learning where algorithms inspired by the human brain learn from large amounts of data. The machines use those algorithms to repeatedly perform a task so they can gradually improve the outcome. The process includes deep layers of analysis that enable progressive learning. Deep learning is part of the continuum of artificial intelligence and is resulting in breakthroughs in machine learning that are creative, exciting, and sometimes surprising. This graphic published by Nvidia.com provides a simplified explanation of deep learning’s place in the progression of artificial intelligence advancement.

Deep learning is part of the continuum of artificial intelligence.

Deep learning has an array of commercial uses. Here’s one example: You are a manufacturer and different departments on the factory floor communicate via work order forms. These are hand-written paper forms which are later manually typed into the operations management system (OMS). Without machine learning, you would hire and train people to perform manual data entry. That’s expensive and prone to error. A better option would be to scan your forms and use computers to perform optical character recognition (OCR). This allows your workers to continue using a process they are familiar with, while automatically extracting the relevant data and ingesting it into your OMS in real-time. With machine learning and deep learning, the cost is a fraction of a cent per image. Further, predictive analytics models can provide up-to-date feedback about process performance, efficiency, and bottlenecks, giving you significantly better visibility into your manufacturing operation. You might discover, as one of our customers did, that certain equipment is under-utilized for much of the day, which is just one example of the types of “low hanging fruit” efficiency improvements and cost savings enabled by ML.
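As a minimal sketch of the OCR step described above, assuming the open-source pytesseract and Pillow libraries and a placeholder file name (a production pipeline would use a trained, domain-specific model and push the parsed fields into the OMS):

```python
# Minimal OCR sketch (pytesseract + Pillow assumed; file name is a placeholder).
from PIL import Image
import pytesseract

def extract_work_order_text(scan_path):
    """Pull raw text from a scanned work-order form."""
    image = Image.open(scan_path)
    return pytesseract.image_to_string(image)

# Downstream code would parse the extracted fields and ingest them into the OMS.
print(extract_work_order_text("work_order_scan.png"))
```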

What are the technologies and frameworks that enable Deep Learning?

While there are several technology solutions on the market to enable deep learning, the two that rise to the top for us right now are PyTorch and TensorFlow. They are equally popular, and currently dominate the marketplace. We’ve also taken a close look at Caffe and Keras, which are slightly less popular but still relevant alternatives. That said, I’m going to focus on PyTorch and TensorFlow because they are market leaders today. To be transparent, it’s not clear that they are leading the market because they are necessarily better than the other options. TensorFlow is a Google product, which means that TensorFlow benefits from Google’s market influence and integrated technologies. TensorFlow has a lot of cross-platform compatibility. It is well-supported on mobile, edge computing devices, and web browsers. Meanwhile, PyTorch is a Facebook product, and Facebook is significantly invested in machine learning. PyTorch was built as a Python-native machine learning framework, and now includes C++ APIs, which gives it a market boost and feature parity with TensorFlow.

Deploying to the Edge

What about deploying your models to the edge? My experience with edge ML workloads started a while back when I needed to get a model running on a low-powered Raspberry Pi device. At the time the inference process used all the available CPU capacity and could only process data once every two seconds. Back in those days you only had a low-powered CPU and not that much RAM, so the models had to be pruned, quantized, and otherwise slimmed down at the expense of reducing prediction accuracy. Drivers and dependencies had to be installed for a specific CPU architecture, and the entire process was rather convoluted and time consuming.

These days the CPU isn’t the only edge compute device, nor the best suited to the task. Today we have edge GPUs (ex: NVIDIA Jetson), Tensor Processing Units (TPU), Vision Processing Units (VPU), and even Field-Programmable Gate Arrays (FPGA) which are capable of running ML workloads. With so many different architectures, and new ones coming out regularly, you wouldn’t want to engineer yourself into a corner and be locked to a specific piece of hardware which could become obsolete a year from now. This is where ONNX and OpenVINO come in. I should point out that Valence is a member of Intel’s Partner Alliance program and has extensive knowledge of OpenVINO.

ONNX Runtime is maintained by Microsoft. ONNX is akin to a “virtual machine” for your pre-trained models. One way to use ONNX is to train the model as you normally would, for example using TensorFlow or PyTorch. Conversion tools can convert the trained model weights to the ONNX format. ONNX supports a number of “execution providers” which are devices such as the CPU, GPU, TPU, VPU, and FPGA. The runtime intelligently splits up and parallelizes the execution of your model’s layers among the different processing units available. For example, if you have a multi-core CPU, the ONNX runtime may execute certain branches of your model in parallel using multiple cores. If you’ve added a Neural Compute Stick to your project, you can run parts of your model on that. This greatly speeds up inference!

OpenVINO is a free toolkit that facilitates optimizing a deep learning model from a framework and deploying it with an inference engine onto compatible hardware. It comes in two versions: an open-source version and one supported by Intel. OpenVINO is an “execution provider” for ONNX, which means it allows you to deploy an ONNX model to any compatible device (even FPGA!) without writing platform-specific code or cross-compiling. Together, ONNX and OpenVINO provide the ability to run any model on any combination of compute devices. It is now possible to deploy complex object detection, and even segmentation models, on devices like humble webcams equipped with an inexpensive onboard VPU. Just like an octopus has multiple “brains” in its tentacles and head, your system can have multiple edge inference points without the need to execute all models on a single central node or stream raw data to the cloud.
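Here is a hedged sketch of that workflow: export a small PyTorch model to ONNX, then run it with ONNX Runtime using whichever execution providers are available on the device. The model and shapes are placeholders, and the OpenVINO provider only appears if its execution-provider package is installed.

```python
# Sketch: PyTorch -> ONNX export, then inference via ONNX Runtime execution providers.
import numpy as np
import onnxruntime as ort
import torch
import torch.nn as nn

# Placeholder model standing in for a real trained network.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2)).eval()
dummy_input = torch.randn(1, 8)

# Convert the trained weights to the ONNX format.
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Ask ONNX Runtime which execution providers exist on this machine, then prefer
# OpenVINO when it is installed, falling back to the CPU provider.
available = ort.get_available_providers()
preferred = [p for p in ("OpenVINOExecutionProvider", "CPUExecutionProvider") if p in available]

session = ort.InferenceSession("model.onnx", providers=preferred)
outputs = session.run(None, {"input": np.random.randn(1, 8).astype(np.float32)})
print(outputs[0].shape)  # (1, 2)
```

The same exported model file can later be pointed at a GPU, VPU, or FPGA provider without changing the application code, which is exactly the hardware flexibility described above.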

Thanks to all these technologies, we can deploy machine learning models and deep learning programs on low powered devices, often without even requiring an internet connection. And even using these low powered devices, the deep learning projects are producing clever results where the machines learn to identify people, animals, and other dynamic and nuanced objects.

What should you do with this information?

This depends on who is reading this. If you are a business owner or operator who is curious about machine learning and deep learning, your key takeaway is that any data scientist you work with should have a mastery of these technologies.

If you are a data scientist, consider these technologies to be must-haves on your skill inventory.

Additional Resources

Let’s talk about Unified Data Governance (UDG)

Let’s talk about Unified Data Governance (UDG)

By Jim Darrin

Unified Data Governance (also known as United Data Governance), or UDG, describes the process of consolidating disparate data sources to create a single data narrative across the myriad data stores within an organization.

Technology is at the heart of every modern company, and data management is more than a side effect of a business. Rather, data is an asset and a risk factor that increases in importance as businesses grow and move along the arc of their digital maturation.

According to McKinsey, only a small fraction of companies effectively leverage data-informed decision-making strategies, yet those that make data-informed decisions have outperformed competitors by 85% in sales growth and by more than 25% in gross margins. McKinsey also reported that in 2015 corporations paid $59 billion for US regulatory infractions, money those corporations could have used for other purposes.

Harnessing vast amounts of data is critical to glean insights and meet business goals.

Unified data governance is not just critical for large companies — in fact, the earlier you are on your technology journey, the better positioned your business is to establish best practices and infrastructure that can scale into the future.

A unified data governance strategy will make sure that a business and its people can develop and deliver trusted data to the right users at the right time and in the right format. Being able to manage a business’s critical data assets can unleash opportunity within the business, reduce regulatory risk, improve business insights, and eliminate manual processes.

Why are we excited about UDG? Unified data governance aligns perfectly with Valence’s engineering and innovation strategy capabilities. Analytics and reporting are in our DNA, and business-focused innovation is what gets us out of bed every day.

Unified data governance can break down data silos, improve data quality, lower data management costs, increase access for users, and reduce compliance costs and risks.

Therefore, we’ve released a new service offering, Valence Unified Data Governance. We are bringing businesses into the data unification process, and currently see the potential for Microsoft Azure Purview to be a uniquely scalable and stable unified data governance technology. Valence is a Microsoft Gold Partner, and our relationship with Microsoft made it a no brainer for Valence to be among the first to market with an offering based on Purview. You can read the press release about this new offering here.

Should you be thinking about UDG?

While every modern business needs to address its data and governance, organizations in regulated industries are particularly strong candidates for a UDG strategy. In addition to the common issues of manual data management, inconsistent reporting results, and disparate data sources, regulated industries carry the added risk of compliance failures. Organizations in regulated industries are also likely to have high data volume, diverse data sources, data silos, ownership issues, incomplete data documentation, and questions about data source fidelity. Regulated sectors like law firms, healthcare organizations, and state/local governments have the most to gain by adopting UDG sooner rather than later.

Here’s what our own Steven Fiore thinks about UDG: “I’ve got years of experience working in state and local governments, and I know first-hand that smart and hardworking public servants are faced with tight budgets and challenging manual data management processes. Unified data governance is desperately needed in these organizations, and I feel personally excited to help people to find a better way.”

Here are four features of UDG that we are excited about:

  1. Unified data governance helps businesses understand what their sensitive data is, where it lives, and how it is being used.
  2. UDG also helps organizations understand what is and isn’t protected, where compliance risks lie, and whether they need additional safeguards such as encryption.
  3. UDG with Purview allows us to aggregate multiple data sources and connect certain types of information like Social Security and credit card numbers or employee IDs. With UDG, businesses can identify these different types of information and associate them with their data sources (a simplified sketch of this kind of classification follows this list).
  4. One feature of UDG that seems simple but can be a game changer: it lets you trace where the source data behind a report lives.
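To make item 3 a bit more concrete, here is a toy illustration of the kind of pattern-based classification a unified data governance tool performs at catalog scale. The patterns and sample records below are hypothetical and purely for illustration; this is not the Purview API, just a sketch of the underlying idea.

    # Toy sensitive-data classifier: labels records that look like they contain
    # SSNs or credit card numbers. Patterns and sample data are hypothetical.
    import re

    CLASSIFIERS = {
        "US Social Security Number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "Credit Card Number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    }

    def classify(record):
        # Return every sensitive-data label whose pattern matches the record.
        return [label for label, pattern in CLASSIFIERS.items() if pattern.search(record)]

    sample_records = [
        "Customer 1042 paid with card 4111 1111 1111 1111",
        "Employee SSN on file: 123-45-6789",
        "Shipping address: 500 Main St",
    ]

    for record in sample_records:
        print(record, "->", classify(record) or ["no sensitive label"])

A real UDG deployment applies far richer classifiers across every registered data source and then ties those labels back to lineage, ownership, and access policy; the point here is simply how classification connects raw values to a governed label.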

We work with clients to address their data and reporting using an array of technologies and techniques — the first step is to understand your data landscape, and then to develop a data governance roadmap. With the roadmap in place, your business can rapidly implement the UDG solution and then experience the acceleration and opportunity that comes from a modern technical solution engineered to manage your data at scale.


The Robots are Here — Are You Ready?


By Glenn Bowers

Businesses around the world are using robots to modernize operations.

Every company is a technology company. Innovation is what keeps companies nimble, progressive, and efficient. Robots have been the symbol of futurism for generations — and the truth is that the future is now. The robots are here.

Take healthcare, for example. Companies like OhmniLabs and Monogram are developing robots for everything from telepresence to robotic surgical assistants. The pandemic has also introduced a new wave of robotics for office and retail spaces. We are seeing more remote telepresence with robots roaming warehouses, checking barcodes, counting inventory, checking on patients, and even sanitizing workspaces.

A recent report from The Economist says that the spike in automation and robotics will stick around long after the pandemic has subsided. The article leads with a story about GM using robotics and AI for autonomous electric pallets in warehouses — which highlights how robotics aren’t just for futuristic marketing sizzle — robotics are officially a part of mainstream logistics and operations.

According to this 2020 story in TechJury.net, 88% of businesses worldwide plan to adopt robotic automation into their infrastructure. This stat was established before the pandemic accelerated the adoption of robotics. We expect that the number of businesses looking to robotics and automation has only increased, as long as the supply chain can keep up.

Aaron Campbell, who leads strategic partnerships for OhmniLabs, summed up the future of robotics this way: “In virtually every major industry, I’ve seen first-hand how the pandemic catalyzed the adoption of emerging technologies and our robots, in particular. From Fortune 500 companies to government institutions, one message is resoundingly clear: robots have a place in nearly every type of organization.” Valence partners with companies such as OhmniLabs founded on our shared interest in emerging technologies and innovative user experiences.

Robotics will undoubtedly be one of the most important strategies that leaders and organizations in congested spaces use to expand market share, increase profitability, and displace competition.

At the end of the day, digital transformation is a requisite part of creating a competitive advantage, and, ultimately, survival. We’ve taken note of this and created an Enterprise Pilot Program to make it easy for interested leaders to start the process.

Will robotics fit into your business?

If 88% of businesses have determined that robotics fit into their business, there’s a good chance that it could fit into yours. If you are curious about where robotics or Robotic Process Automation (RPA) could be applied to your business, look at your work processes and workflows to find the interactions and transactions that could be automated, freeing up your people for higher-value work.

As the price of sensors and chips continues to decrease, driven largely by advances within the mobile computing market, there has been a dramatic drop in the cost of robots. In certain cases, robots like the OhmniLabs telepresence robot can be purchased for less than $5,000. This makes it much easier and more affordable for companies to invest in a robotics project.

How to kick off and scale a robotics program

Start with a pilot program. Pilot programs are a smart way to ease into a new technology. A small-scale pilot program could provide your company with a way to explore the potential impact of a new technology before making a significant investment. The goal of a pilot program is to understand what is feasible. It creates a safe and controlled environment to test logistics, assess value, and reveal deficiencies before deploying the technology at a large scale.

If this is the first time that your company is investing in a robotics pilot program, we recommend that you start by automating processes that are less complex in nature. This will increase the likelihood that your initial pilot program will succeed. Once you’ve identified the objectives of a robotics pilot program, your team can document the steps to test the concept and establish metrics to evaluate the success of the pilot. Because the driving goal of a pilot program is to learn about what is possible while the stakes are controlled, most pilot programs achieve some degree of success. You can’t go wrong if you are learning.

Following the pilot, you need to understand what is required to scale the program up. When pilot programs scale up to a full production release, fewer than 15% achieve the desired ROI. This low success rate is largely due to the lack of a holistic approach to production scaling rather than the capabilities of the technology itself. To improve your chances of achieving the desired ROI, there are a few things you should consider when scaling from a small pilot to an enterprise production release.

  1. Depending on the complexity of your implementation, you may need to work with multiple vendors, each with its own hardware/sensor extensions. Those hardware/software extensions will each need to integrate with your company’s platforms and other hardware.
  2. Be realistic about the time and effort required to integrate robotic devices. Due to a lack of industry standards, each of these integrations will likely require some form of custom implementation. The resources needed to develop, deploy, and manage a production robotic fleet are often underestimated.
  3. The work doesn’t stop once your solution has been deployed across your enterprise. There will be a need for maintenance and updates to your robots and ancillary hardware and sensors. Account for the ongoing resources needed to maintain your implementation.

A secret weapon: The Innovation Scale Roadmap

Our work with clients across industries has revealed that the secret to a successful deployment of an emerging technology from pilot to production is the Innovation Scale Roadmap (ISR). An ISR can help you to develop a strategy that pulls internal stakeholders together, supports the development and implementation of new technology, and prepares the company for innovation.

A quality ISR documents and plans for the success criteria of both technical and implementation measures. The ISR will guide the evaluation of the pilot program and help the company determine whether it is ready to scale from pilot to production. In addition to the value that the ISR brings as a decision-making tool, it is also critical as a strategic roadmap that can be referenced throughout deployment. The ISR should include a detailed deployment plan that addresses the desired rate of scaling into production along with the ongoing development and support needs for the program.

Want to learn more? Contact us, and we can talk. You can also check out this roundup of robotics at the CES virtual tradeshow, where Katie Collins breaks down many of the robotics solutions that caught her eye.

5G and the Next Decade of Digital Transformation


By: Jim Darrin, CEO

I am excited to announce today that Valence has joined the 5G Open Innovation Lab (5G OI Lab) ecosystem as a Technical Partner. This is an incredibly important milestone for the company given our focus on the next decade of digital transformation. Since the beginning, Valence has operated under the belief that yes, “software is eating the world,” and that every company, enterprise, non-profit — you name it — will not only feel the impact but, more importantly, be able to embrace this ongoing transformation. 5G and digital transformation go hand-in-hand.

Our three-pronged thesis is simple: the trend of (1) increasingly capable cloud software platforms from Google, Microsoft, Amazon and more, plus (2) improved and lower-cost hardware platforms from robotics companies, VR headsets (we see you, Oculus!), and more, plus (3) always-on, high-speed internet access to every part of the physical world will create a cocktail of innovation opportunities like we have not seen before. And 5G is a critical ingredient. This is where we play as Valence: in the middle of that mix are enormous opportunities for companies to build solutions, think of new business models, improve user experiences, and more.

We have worked with T-Mobile for years and were thrilled earlier this year to hear about the 5G OI Lab. Founded by T-Mobile, Intel, and NASA, the 5G OI Lab is set up to be a global ecosystem of developers, start-ups, enterprises, academia and government institutions that bring engineering, technology and industry resources together. Valence will now be a key part of this Lab as a Technical Partner to help companies think through and execute on projects and programs that take full advantage of 5G technologies and capabilities. And we couldn’t be happier about it.

For more information, contact us. We’d be happy to talk digital transformation anytime!