
Selecting an Assessment Tool – 5 Business Principles Vital to Your Success

Executive Summary

The goal within most organizations is to hire a happy, productive workforce that stays on the job longer and produces more. That simple mission is often very hard to execute without an HR tool that is proven to predict a candidate’s on-the-job performance and tenure. Volumes of research show that an assessment technology, when positioned and deployed correctly, will reduce turnover and improve productivity while creating a reservoir of objective performance data designed to identify prospective employees who are good fits in specific job roles.

To fulfill the mission of hiring a productive workforce that stays on the job longer and produces more, organizations have come to treat assessment technology as a mission-critical component. With the right assessment technology, your company should have the means to identify, develop, and retain a highly productive workforce, which is one of the vital ingredients of business success.

I want to share with you lessons I’ve learned over the last decade on how to most effectively select, deploy, and study the effectiveness of an assessment technology solution. Equipped with these five principles, you will have the fundamental considerations that must be top-of-mind when purchasing an assessment technology solution.

The Principles

Principle #1: An assessment technology should be…

Proven to predict employee performance.

Assessment technologies are designed to assist organizations in identifying candidates who will be successful on the job. To determine which assessment can best meet your organization’s needs, you must be convinced of the system’s ability to predict performance. From an objective, scientific perspective, performance predictability of an assessment solution is most often documented through two concepts: reliability and validity.

Reliability: Only Part of the Equation

I met a good friend of mine at a golf course in West Texas many years ago. Our plan was to enjoy a round or two and catch up on old times. However, due to a high volume of golfers waiting in line, the course officials paired us up with two “local boys” (that’s a Texanism for two grown men you don’t know).

I was the last to tee off after watching my friend and the two local boys really set the pace by crushing their drives. Embarrassingly, I “topped” the ball, meaning I barely caught enough of the ball to send it gently skipping down the middle of the fairway about fifty yards from the tee box.

As golf etiquette would have it, the player furthest from the hole must hit the next stroke. As I took a couple of practice swings, I noticed the two local boys waiting in front and just to the right of my position on the fairway.

In a neighborly fashion, I called out, “Hey, you boys might want to move. I have a nasty slice.” (My ball always curls off to the right.) One of the two nonchalantly called back, “Aw, don’t worry, you won’t hit us!” Not wanting to disrupt the flow of the game, I warily continued to line up my shot. I tightened my grip on the club, took one more practice swing, and then let it rip.

It really was a beautiful shot, featuring my standard beautiful slice in all its glory. The ball curved so fast I did not have time to yell “fore.” Before I knew it, the ball whistled straight at the local boys and struck one with a loud thud! (I suppose he was fortunate; the ball struck that padded area between the hamstrings and the lower back.) The golfer with the smarting backside shrieked so loudly that everyone on the course felt his pain.

The ever-present slice in my golf swing provides the perfect illustration of the concept of reliability in an assessment technology.

In golf, I reliably slice the ball to the right side of the course every time; you can count on it, and, unfortunately, the local boys did not heed the warning. To relate this to assessment terms, anytime you assess someone, you want to receive a reliable result. The reliability of an assessment focuses on the consistency of the responses, but not the accuracy. In practical terms, an assessment that asks several similar questions, using slightly different words, would yield similar answers. Put another way, if a person took an assessment, then took it again later, the results should be very similar. By contrast, if you receive a wide variety of responses, you would likely determine that the measure is not reliable.

The statistical reliability of an assessment is measured in several different ways. It would take a lengthy white paper to cover this topic to my satisfaction, but, in simple terms, a rule of thumb for a behavioral assessment instrument is to achieve reliability of .7 to .8. This range will vary depending on the type of assessment used. I would encourage you to not only ask about the reliability of any assessment technology, but also about the background data that defines how that number was generated.
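
As a concrete illustration, the short sketch below shows one common way an internal consistency estimate, Cronbach's alpha, is calculated from responses to several similar questions. This is a minimal example with invented data, not a formula taken from any particular assessment vendor; a real instrument would be evaluated on far larger samples, with the .7 to .8 rule of thumb above as the target.

```python
# A minimal sketch of one common reliability estimate, Cronbach's alpha,
# which reflects how consistently a set of similar items measures the same
# characteristic. The sample data below is invented purely for illustration.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: rows = respondents, columns = similar items."""
    k = item_scores.shape[1]                       # number of items
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five respondents answering four similar questions on a 1-5 scale.
responses = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 1],
])

print(f"Estimated reliability (alpha): {cronbach_alpha(responses):.2f}")
```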

It is important to remember that reliability is only part of the equation. Without validity, you will not have a full picture of the assessment’s effectiveness. For example, to better understand the actual success of my golf game (or lack thereof), we need to analyze my validity to determine how accurately I can hit the ball in the hole. (At least I am reliable…one out of two isn’t bad.)

Validity: Does the Assessment Work?

Validity answers a very different question: does it work? In the game of golf, the number of strokes needed to complete a round provides a validity estimate of a player’s golfing ability. It is important to understand that one round of golf at one course does not provide an accurate representation of that ability. Golfers attain different scores depending on the weather, the type and difficulty of the course, the number of holes played, the number of strokes required to make par, and so on. It is not one round, but the body of evidence collected over time, that establishes the validity of a player’s golf game.

This concept translates nicely to assessment validity. When evaluating the validity of an assessment technology, you should focus your evaluation efforts on the volume of studies, types of roles, and the sample sizes of the various studies. Generally, assessments should deliver a validity coefficient in the neighborhood of .2 to .4. Like reliability, but even more so, the range of the validity coefficient may vary due to the context of the study, sample sizes, length of study, etc. Dig into the reported validity coefficient as well as the supporting documentation that details the study process.
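
To make the idea of a validity coefficient concrete, the sketch below correlates overall assessment scores with an objective performance metric for a handful of hypothetical hires. The data is invented purely to show the mechanics; with a sample this small the resulting number is not meaningful, and real validation studies rely on much larger samples to land credibly in the .2 to .4 neighborhood described above.

```python
# A minimal sketch of how a validity coefficient is commonly estimated:
# the correlation between assessment scores and an objective on-the-job
# performance metric. All figures are invented for illustration.
import numpy as np

# Hypothetical overall assessment scores for ten hires...
assessment_scores = np.array([62, 71, 55, 80, 68, 74, 59, 85, 66, 77])
# ...and an objective performance metric for the same ten people,
# e.g. monthly sales after six months on the job.
monthly_sales = np.array([50, 44, 47, 66, 58, 52, 61, 70, 45, 59])

# The Pearson correlation between the two series is the validity coefficient.
# With ten made-up data points the value is illustrative only.
validity_coefficient = np.corrcoef(assessment_scores, monthly_sales)[0, 1]
print(f"Validity coefficient: {validity_coefficient:.2f}")
```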

Collectively, discussions around reliability and validity should provide you with the confidence you need to narrow the choices of possible assessment technologies for your organization.

Principle #2: An assessment technology should be…

The catalyst to continuous workforce improvement.

To stay competitive, every company should desire to see continuous improvement in the workforce. The advantages that an organization gains through the pursuit of continuous improvement are numerous: more productive workers, better process efficiencies, lower overall expenses, and higher revenues, to name a few. The key to that kind of long-lasting improvement lies in bettering the performance of every member of the organization. After all, individuals make up teams, teams make up departments, departments comprise company divisions, and divisions form corporations. Individual performers are the building blocks of the entire structure.

Often the key role that individual performers play in creating a culture of continuous improvement is overlooked.

Traditionally, companies are very good at monitoring and tracking performance of the masses at the company, regional, and group levels. However, those same organizations often miss the mark when it comes to tracking and monitoring performance at the individual level. Without solid tracking of individual job performance, companies are unable to evaluate performance on the front lines where it actually occurs: at the individual level.

As part of your evaluation of assessment technologies, look for processes that rely heavily, if not solely, on objective performance metrics to document the effectiveness of individuals in the workforce. Individual performance numbers will not only define “success” in your company and culture, but also serve to link behaviors to performance when a behavioral assessment tool is introduced into the hiring procedure.

This is how your assessment technology can become the catalyst for continuous workforce improvement. If positioned properly, the assessment software will be a crucial collection point of individual behaviors and related performance metrics that dictate what great performers look like in specific jobs.

To derive the best results from an assessment technology, it is important to understand performance in terms of data at the individual level. Understanding individual performance will provide you with a clear performance picture surrounding the objectives and desired outcomes for a position. The clearer the performance picture, the better equipped you are to accurately capture the behaviors and skills needed for success.
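
As an illustration of what an individual-level performance picture can look like in practice, the sketch below pairs each hypothetical employee's objective output metric with their assessment characteristic scores and checks how each characteristic relates to performance. The trait names, metric, and numbers are all invented; the point is simply that behaviors can only be linked to results when the data is kept at the individual level rather than aggregated by region or group.

```python
# A minimal sketch of individual-level records that pair an objective
# performance metric with assessment characteristic scores. All names
# and numbers are invented for illustration.
import numpy as np

# employee_id: (units_per_hour, {characteristic: score})
records = {
    "E001": (42, {"conscientiousness": 82, "sociability": 35}),
    "E002": (28, {"conscientiousness": 55, "sociability": 74}),
    "E003": (47, {"conscientiousness": 90, "sociability": 41}),
    "E004": (31, {"conscientiousness": 61, "sociability": 68}),
    "E005": (39, {"conscientiousness": 78, "sociability": 52}),
}

performance = np.array([perf for perf, _ in records.values()])
for trait in ("conscientiousness", "sociability"):
    scores = np.array([traits[trait] for _, traits in records.values()])
    r = np.corrcoef(scores, performance)[0, 1]
    print(f"{trait}: correlation with units/hour = {r:.2f}")
```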

Once an assessment technology is installed, part of your organization’s ongoing maintenance will be reevaluating the clarity of its performance data on a continual basis in order to improve how behaviors and skills are captured. In this process, it is commonplace for companies to focus on higher quality individual performance metrics to better leverage their assessment technology. This effect will automatically raise the bar in terms of selection, training, development, and employee productivity across any position where an assessment technology is deployed.

In summary, focusing on detailed, objective performance data collection methods will inevitably lead to a better capture of behaviors and skills. A better data capture through an assessment technology leads to the accumulation of workers who are more aligned with desired business performance goals. Eventually, one component improves the other, fueling an ongoing cycle of continuous improvement.

Principle #3: An assessment technology should be…

Focused on fit; more is not always best.

Have you heard the saying, “More is better”? In the game of golf, you have a variety of golf clubs designed for different situations. Some clubs are for driving the ball great distances down the fairway, while other clubs are used for shorter shots such as chipping or putting. Imagine how your golf game would suffer if you believed that the bigger club was always better. On a par three hole, you may overshoot the green with one swing. Even worse, once you make it to the green, you will struggle to putt the ball into the hole using your driver. At that point, the bigger club actually hurts your ability to maneuver the ball where you want it to go, which is in the hole. Clearly, more is not always better.

The same concept applies when it comes to using an assessment. Typically, assessments measure a collection of characteristics (referred to as factors, dimensions, etc.). Many people assume, incorrectly, that it is always better to be on the higher side of a characteristic (the More is Better Syndrome).

Let’s consider the implications of this thought process. Is being smarter always better? What about filling a mundane job vacancy? How long would a brilliant person stay in a non-thinking, repetitive job? Is being highly sociable a great characteristic for every job? Consider an isolated role where interaction with others is detrimental to good performance. Would a person who thrives on socializing enjoy, or be driven to success, in this type of role?

Of course, I’m exaggerating these scenarios to drive home the point: it is important to avoid the mistake of assuming more is always better. The key to fully utilizing the power of the assessment is to find just the right amount of each characteristic needed to predict future success in a specific role.

By fine-tuning the subtle shades of each assessment characteristic to best describe your strongest performers, you will be better equipped to maximize the predictive power of your assessment tool. Again, great caution should be taken if your objective is to use assessment characteristics only in the context of “more is better.” That method of evaluation often leads to selection tactics based on incorrect assumptions, and it dismisses a large amount of hidden insight that could otherwise increase your ability to identify future top performers who will stay in the position longer.

Keep in mind that most assessment technologies are built according to the assumption that more is better. Your evaluation of assessment technologies should only include systems that measure a large group of behavioral characteristics; moreover, the system must offer flexibility in specifying the optimal amount of each characteristic an ideal candidate would possess to succeed in the target job.
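
To show the difference between “more is better” scoring and fit-based scoring, here is a small hypothetical sketch. The characteristics, ranges, and scoring rule are invented for illustration and are not drawn from any specific assessment product; the idea is that each characteristic has an optimal band for the target job, and candidates are scored by how close they fall to that band.

```python
# A hypothetical sketch of "fit" scoring rather than "more is better"
# scoring. None of the trait names or ranges come from the paper; they
# are invented to show the idea that each characteristic has an optimal
# band for the target job.

# Optimal band (low, high) for each characteristic, assuming a 0-100 scale,
# in a hypothetical repetitive, low-interaction job.
ideal_ranges = {
    "sociability": (20, 45),      # too high may hurt in an isolated role
    "problem_solving": (40, 65),  # "brilliant" may mean boredom and turnover
    "conscientiousness": (70, 95),
}

def fit_score(candidate: dict[str, float]) -> float:
    """Average closeness (0-1) of each trait to its ideal band."""
    scores = []
    for trait, (low, high) in ideal_ranges.items():
        value = candidate[trait]
        if low <= value <= high:
            scores.append(1.0)                              # inside the band: perfect fit
        else:
            distance = min(abs(value - low), abs(value - high))
            scores.append(max(0.0, 1.0 - distance / 100))   # penalize distance from the band
    return sum(scores) / len(scores)

# A candidate who scores "high on everything" is not necessarily the best fit.
high_on_everything = {"sociability": 95, "problem_solving": 98, "conscientiousness": 92}
well_matched = {"sociability": 35, "problem_solving": 55, "conscientiousness": 85}

print(f"High-on-everything fit: {fit_score(high_on_everything):.2f}")
print(f"Well-matched fit:       {fit_score(well_matched):.2f}")
```

Note how the candidate with uniformly high scores ends up with the weaker fit, which is exactly the caution this principle raises.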

Principle #4: An assessment technology should be…

More than just a score.

When selecting an assessment technology, it is important that the usefulness of the assessment goes far beyond a simple score or rating of the candidate. Overall scores are helpful when sorting and sifting candidates and narrowing the field, but the real value comes when you dig deeper and fully leverage all the rich information gathered from the assessment. Specifically, you should be able to apply the assessment information to areas such as enhancing the interview, on-boarding, determining future career paths, and developing employees over the long term.

Enhanced Interviews

Beyond providing a score, information gained from the assessment should improve your interview process. A quality assessment can effectively produce targeted interview questions designed to facilitate discussion around the specifics of a position. These targeted interview questions also provide a means to ensure consistency in your interviewing process regardless of the size or geography of your organization. Additionally, by using the targeted interview questions, you will maximize your time with the candidate. At a minimum, you will have a better understanding of the strengths and opportunities revealed by the assessment in relation to a specific position.

On-Boarding

On-boarding is the process of getting a new hire officially authorized for his or her first day on the job. This hiring phase includes the completion of various governmental and proprietary forms, plus any other paperwork required by the hiring company. To expedite this procedure, an assessment technology will typically be integrated with the company’s Human Resource Information System (HRIS) to pass on all relevant data previously collected on the candidate. In essence, the assessment platform should “fill in the blanks” required on electronic forms in the HRIS database through a transfer of information from the candidate’s original application. Without this integration (more on integrations in the next section), on-boarding remains a manual process and any potential efficiencies that could be driven from the assessment technology are negated. Direct your evaluation of assessment technologies to only those systems with proven integration success with common HRIS technologies.
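
For illustration only, the sketch below shows the kind of field mapping such an integration performs: data the candidate already supplied is translated from the assessment platform’s record into the fields an HRIS on-boarding form expects. No specific vendor, API, or field naming convention is implied; every name here is invented.

```python
# A hypothetical sketch (no specific vendor or API implied) of the field
# mapping an assessment-to-HRIS integration performs so on-boarding forms
# arrive pre-filled. All field names are invented.

# Candidate record as it might exist in the assessment platform.
candidate_record = {
    "first_name": "Jordan",
    "last_name": "Avery",
    "email": "jordan.avery@example.com",
    "position_applied": "Customer Service Representative",
    "assessment_completed": "2014-03-12",
}

# Mapping from assessment-platform fields to HRIS on-boarding form fields.
FIELD_MAP = {
    "first_name": "employee_first_name",
    "last_name": "employee_last_name",
    "email": "employee_email",
    "position_applied": "job_title",
}

def build_hris_payload(record: dict) -> dict:
    """Translate an assessment-platform record into an HRIS payload."""
    return {hris_field: record[src_field]
            for src_field, hris_field in FIELD_MAP.items()
            if src_field in record}

print(build_hris_payload(candidate_record))
```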

Career Pathing

Future career paths are another area where an assessment technology should allow you to go beyond a score. In companies with an eye to the future, the selection strategy is to hire not only for the immediate need, but also to determine each employee’s viability for future positions. For example, if you are tasked with hiring an assistant manager, you may also be interested in a candidate’s potential to be a manager at some point down the road. Your assessment technology should provide you with the insight to understand and evaluate the potential for candidates to move into other positions, and not just the job for which they applied.

Employee Coaching and Development

Companies are often asked to do more work with fewer people on the payroll. Therefore, coaching and employee development programs have become an area of emphasis in most organizations. Consider future coaching tools as an integral part of the assessment technology purchase. The assessment process captures a wealth of data, which should be used throughout the life cycle of an employee. By scientifically examining the relationships between performance data and assessment characteristic scores, the assessment technology provides specific, detailed developmental targets to support continued growth of the assessed individual.

One of the biggest hindrances to creating a quality coaching and development program is finding specific content statistically related to performance on the job. Assessment technology provides the perfect vehicle to supply accurate, job-related content for training in the current position, as well as in future positions.

Principle #5: An assessment technology should be…

A tool that makes your organization better.

Although this principle serves as number five, it fits the old adage, “Last but not least.” Central to any new purchase or program decision is the need to determine how your organization will ultimately define value. A great approach to this question is to ask, “How will this assessment technology make us better?” You will find that value comes in many forms; each organization has its own areas of focus that drive success. Three universal ways in which an assessment technology can better an organization are:

  • Better processes.
  • Better retention.
  • Better performance.

Better Processes

The primary function of an assessment technology is to address the fundamental challenge of identifying candidates who produce more and stay longer on the job. In fulfilling that primary function, your assessment technology should not hinder your overall HR process, but in fact should streamline the hiring workflow. This is most often accomplished through integrations with existing software systems designed to manage the flow of information as candidates move from their initial applications to their first day on the job.

The advent of applicant tracking software (ATS) allowed companies to manage the data generated during the hiring process. ATS tools, not to be confused with assessment technology, were designed only to collect, organize, and move candidates through the HR process. In other words, they simply manage bits of information. Some applicant tracking tools provide a few features such as pre-screens or light assessment functionality, but the central focus is on organizing information. These features are handy but secondary to the primary objective of hiring the right fit for the job.

To enjoy the functionality of assessment technology and an ATS, one business option is to select an assessment technology that can co-exist side by side with an ATS. However, this arrangement isn’t a requirement. Quality assessment technology now provides features to categorize and sort people, collect resumes, store applications, provide detailed reports, and do many other practical tasks to manage your peopleflow: the path every candidate takes from the “Apply Now” portal to the final hire/no hire decision. The focus must always be on selecting the right candidate for the job, but be aware that an assessment technology may build in enough information management features to ensure that your hiring process is smooth, user friendly, and meets your peopleflow needs.

Assessment + ATS = Integration

If your organization has determined to use, or is currently using, an applicant tracking software, then you want to make sure that the assessment technology has the ability to integrate with that specific ATS. Integration is defined as the process of connecting two or more technology solutions together to create a seamless flow of information from one system to another. The seamless flow should be present for both the applicant and the end-user. The objective of an integration is to simplify and streamline the data collection and delivery process.

Integrations are common in the marketplace today. Many systems, such as tax credit screening, background checks, performance management, applicant tracking, and payroll or human resource information systems (HRIS), are connected through seamless integrations. You should expect an assessment technology vendor to provide you with a history of integrations and examples of current clients already using the assessment technology integrated with another ATS or HRIS.

Better Retention

A business objective that is directly addressed by an effective assessment technology solution is improving employee retention. Excessive employee turnover affects all organizations in the form of both direct and indirect costs. Direct costs include the placement of job postings, plus the labor hours devoted to screening and interviewing candidates. There are many indirect costs to consider as well. A few examples are downtime in the vacant position, lost opportunities, overtime expenses for others to cover job vacancies, not to mention the potential negative effect on company morale.

Regardless of your current retention issues, the stakes are high and worthy of careful consideration. Cash America, an international financial services company that studied its hire-termination trends over a two-year period, conservatively calculated the direct and indirect costs for replacing a store manager at $10,000 each, and around $2,500 for each customer service representative. Whether your numbers are higher or lower, it’s readily apparent that for a company with thousands of employees, significant reductions in employee turnover equates to millions of dollars saved over time.
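
A quick back-of-the-envelope calculation shows how those per-replacement costs add up. The headcounts and turnover rates below are assumptions invented for illustration; only the $2,500 and $10,000 replacement costs come from the Cash America example above.

```python
# A back-of-the-envelope sketch using the per-replacement costs cited above.
# The headcount and turnover figures are assumptions for illustration; only
# the $2,500 and $10,000 replacement costs come from the text.
csr_headcount = 5_000          # assumed customer service representatives
csr_turnover_rate = 0.50       # assumed 50% annual turnover
cost_per_csr_replacement = 2_500

manager_headcount = 800        # assumed store managers
manager_turnover_rate = 0.30   # assumed 30% annual turnover
cost_per_manager_replacement = 10_000

annual_turnover_cost = (
    csr_headcount * csr_turnover_rate * cost_per_csr_replacement
    + manager_headcount * manager_turnover_rate * cost_per_manager_replacement
)
print(f"Annual cost of turnover: ${annual_turnover_cost:,.0f}")

# If better behavioral fit trims turnover by a relative 20%, the savings
# scale accordingly.
relative_reduction = 0.20
print(f"Annual savings at a 20% reduction: ${annual_turnover_cost * relative_reduction:,.0f}")
```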

A common thread among much of the existing employment research is the fact that candidates who are good behavioral fits for their particular jobs tend to stay longer and turn over less frequently. It is important to recognize that employee retention is a strong indicator of the improvement effect of an assessment technology. Most companies keep detailed records of terminations for payroll purposes, which makes good business sense; no company would willingly continue to pay an individual who is no longer employed. These records may provide important data for a quality hire-termination study. For example, as part of the aforementioned Cash America study consisting of data on 3,248 employees, the hire-termination data documented that the company experienced a 43% turnover reduction in managerial positions after implementing an assessment technology.

Keep in mind that obtaining study-worthy results for all positions in the organization simply may not be possible.

Expectations for turnover studies should be appropriate to the scope of the position. Roles with small populations, a lack of accurate hire and termination data, or an insufficient amount of time for data collection can all affect your ability to conduct a quality study.

Better Performance

I have never met an executive who did not measure success in terms of performance. Companies may evaluate performance in many different ways, but one business rule is undeniable: improved performance comes from improving your incumbents and selecting better people. Because so many companies desire to improve their workforce, assessments are a natural way to drive that improvement. An assessment technology modeled after actual performance data provides a strong tool for selecting those who have the greatest potential to perform well in the role.

When evaluating an assessment technology, one question is commonly posed by company executives, included in requests for proposals (RFPs), and/or submitted by committees: “What is your validity coefficient?” By latching on to this statistical term, the organization is really asking, “Does it work?” or, “Can you prove it has made other companies better in target positions?” Let’s take a moment to dissect the meaning of this question.

As we touched on in Principle #1, it is important to interpret any answer to the validity question in the context of the particular situation. Remember my golf game. If you ask me what I shoot, like any self-respecting person I am going to tell you my best score. You might think I am a decent golfer based on that one score. What I conveniently neglected to tell you was the situation surrounding that score. I left out the part about all the holes being par threes with no water, sand traps, or trees to get in the way. On an average competitive golf course, my performance would be much worse.

Interpreting validity is about more than just asking, “What is your validity coefficient?” You should dig into the specifics of the situation. Pay attention to items such as sample sizes, the types of data being studied, the types of positions, or any other particulars of interest. Some studies may not, at face value, seem impressive until you understand the situation and interpret the results in that context.

For example, by deploying an assessment technology, a large call center enterprise hoped to identify job candidates who could reduce the average time spent on incoming phone calls. After studying the performance of 704 employees over their first 12 months on the job, employees hired using the assessment process averaged call times that were 1.14% shorter than calls taken by their non-assessed coworkers. That translates to a savings of approximately four seconds per call, or about the time it took you to read this sentence.

At first glance, are you impressed with a 1.14% improvement? Before you answer, consider this: across the entire corporation consisting of multiple call centers, each second shaved from the average call time is valued at $175,000 over the course of a year. That four-second improvement saves over $700,000 per year company-wide, and the assessment technology has paid for itself many times over.
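
For readers who want to see the arithmetic, the sketch below reconstructs the figures in this example. The 1.14% improvement, the roughly four-second saving, and the $175,000-per-second annual value come from the study as described; the average call length is simply implied by those numbers.

```python
# A quick reconstruction of the arithmetic in the call center example above.
# The 1.14% improvement, the ~4-second saving, and the $175,000 value per
# second per year come from the text; the average call length is derived.
improvement = 0.0114                 # 1.14% shorter average call time
seconds_saved_per_call = 4           # approximate saving cited in the text
value_per_second_per_year = 175_000  # company-wide value of one second

implied_avg_call_seconds = seconds_saved_per_call / improvement
annual_savings = seconds_saved_per_call * value_per_second_per_year

print(f"Implied average call length: ~{implied_avg_call_seconds:.0f} seconds")
print(f"Estimated annual savings: ${annual_savings:,.0f}")
```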

While there are plenty of success stories, be aware that the reverse can occur. A study may appear very impressive at first glance, but when the situation is exposed to the light, the results may be found lacking due to tiny sample sizes or some other extreme set of conditions.

Breaking down the question, “What is your validity coefficient?” a bit further, we find that it is framed in a singular context; that is, the person asking is looking for one number, one value, that represents the entire concept of “Does it work?” or “How has this made someone else better?” It is important to realize that a solid, proven assessment technology should be able to show many studies from different companies, positions, and situations. Each study, based on its situation, should show a relationship (in one form or another) between the assessment outcome and the performance metric. The documented volume of evidence should go well beyond a single “validity coefficient” and provide a substantial body of ongoing research proving the technology has made, and continues to make, other companies better.

Just as with a hire-termination study, obtaining concrete performance results for all positions may not be possible. Temper your expectations for performance studies according to the scope of the position. Small sample sizes, a lack of objective performance metrics, or an insufficient amount of time for data collection can affect your ability to conduct a quality study.

When evaluating an assessment technology, ask to see multiple client case studies that demonstrate significant performance improvements based on quality sample sizes. Reputable assessment technologies should provide access to a technical manual packed with studies that detail significant improvements in the areas of turnover and performance.

Summary

There you have it…the list of five business principles that should guide your decision on your next purchase, or upgrade, of an assessment technology. To recap, here are the five principles:

  • Principle #1: An assessment technology should be proven to predict performance.
  • Principle #2: An assessment technology should be the catalyst to continuous workforce improvement.
  • Principle #3: An assessment technology should be focused on fit; more is not always best.
  • Principle #4: An assessment technology should be more than just a score.
  • Principle #5: An assessment technology should be a tool that makes your organization better.

This is by no means an all-inclusive list, but if an assessment falls short on one or more of these principles, keep shopping. Your efforts will deliver great dividends for your company when the right assessment technology is in place.

One tip I recommend to those evaluating different assessment technology tools is to create a wish list of features and functionality. Be sure that the needs of all levels of end-users are included in your wish list. Then categorize the list into groups consisting of the “must haves” and the “like to haves.” This little exercise will help you focus your efforts during the evaluation process to ensure you achieve maximum improvement within the organization.
