Developing Managerial Talent Through Simulation
In the last 35 years, simulations have been used with increasing frequency in the development of managerial talent. In various forms, simulations of managerial and organizational activities have been used to study administrative behavior (Hemphill, Griffiths, & Frederiksen, 1962), assess potential (Thornton & Byham, 1982), enhance managerial skills (McCall & Lombardo, 1978), diagnose training needs (Stumpf, 1988a, 1988b), foster team building among groups of managers (Kaplan, Lombardo, & Mazique, 1985), and evaluate the effectiveness of managerial training (Moses & Ritchie, 1975). In this article, we evaluate how simulations have contributed to the development of managerial talent by summarizing theory, research, and practice in three areas: research on managerial behavior, assessment of managerial abilities, and training managerial skills.
Definitions and Background
Our review of simulations covers what are formally called gaming simulations (Jones, 1972). A simulation is a model or representation of real-world events in which elements are depicted by symbols or numbers or in physical form. In a simulation, some essential features of an activity are duplicated without portraying reality itself (Jones, 1972)—for example, something as simple as the interactions of a manager and subordinate dealing with a performance problem on the job, or something as complex as an island nation faced with multiple economic and political crises (Streufert, Pogash, & Piasecki, 1988). A game involves one or more players who are given background information to study, rules and conditions to follow, and roles to play. The essential feature of a game is the interactive process of players and the system (Jones, 1972).
When gaming simulations are used for assessing individuals, they are often called performance tests or exercises (Cronbach, 1970) or, when set in the context of an organization, business games. We have restricted this review of business games to simulations of social interactions, general management processes, and decision making that are used for management development. We have not covered simulations of technical management functions, such as production planning or marketing, nor simulations used in management education, because these have been reviewed extensively by others (Biggs, 1986, 1987; Faria, 1987; Freedman, Cooper, & Stumpf, 1982; Fritzsche, 1987; Keys, 1987). In this review, we use the shortened term, simulation, to refer to all of these gaming simulations.
Management simulations were an outgrowth of several major fields: military war games, operations research, role playing, and performance testing. They were used largely for instructional purposes, for example, replaying battle strategies (Cohen & Rhenman, 1961; Coppard, 1976). The first computer-scored business simulation, Top Management Decisions Simulation, was created by the American Management Association for executives in 1957 (Ricciardi et al., 1957). This development was followed by the creation of the role-playing exercise (Moreno, 1959) and the introduction of performance testing for management assessment in the United States in the 1950s in AT&T's Management Progress Study (Bray & Grant, 1966). It has been estimated that several thousand organizations use simulation technology for management evaluation and diagnostic purposes (Byham, 1986), including the use of complex games (22%) and role plays (44%) as training methods (Ralphs & Stephen, 1986). More recently, Faria (1987) found that approximately 55% of the large organizations in the United States were using business games in management training.
In this article, we evaluate how simulations of varying complexity are used for various purposes in the process of management development. Next, we explain the theoretical rationale for the use of simulations as research tools to study managerial behavior, as assessment devices, and as learning experiences. An evaluation of the appropriate use of simulations in management development is then provided. We suggest that simulations must be used judiciously within a broad sequence of other developmental efforts, partly because simulations have certain limitations as assessment and training techniques. The key question is not if, but when simulations should be used.
Uses of Simulations
Simulations provide an effective method of studying managerial behavior. Prior to the development of simulations, much of our knowledge of leadership and managerial behavior was based on observational research or questionnaire surveys of managers (Mintzberg, 1971, 1973). These methods provided information on the nature of managerial work, the types and quantity of decisions a manager typically makes during the course of a day, and styles of managerial behavior. Unlike direct observation, simulations allow greater control and opportunity for manipulating an event and understanding subsequent behavior. Unlike questionnaires, simulations elicit overt behaviors of participants related to complex skills such as communication, decision making, and interpersonal interactions. Streufert and Swezey (1986), Frederiksen (1962, 1966), and others (Guetzkow, 1959; Mullen & Stumpf, 1987) have used simulations to expand research into areas such as cognitive complexity, administrative effectiveness, and strategic planning.
Managerial Assessment and Diagnosis
Simulations provide assessors and participants with an opportunity to evaluate skills that cannot be adequately evaluated with paper-and-pencil instruments. Simulations such as the leaderless group discussion, in which 4 to 7 managers solve a set of business problems; the in-basket, which calls for the manager to read and respond to material coming into the office; and complex organizational simulations, which may involve 10 to 12 managers operating a hypothetical business for several hours, provide rich settings for generating peer and observer feedback on managerial behavior. Simulations give the individual participant a chance to experience a segment of organizational life in a safe environment and to obtain self-insights.
Along with other needs assessment techniques (e.g., supervisory evaluations and surveys), simulations may provide useful information about the training needs of a group or an organization. A group of managers who work together can be observed in a simulation and diagnosed for training needs. After assessing a cross-section of its managers, an organization often has the information necessary to develop a training curriculum.
Simulations can also be used to predict managerial potential (Thornton & Byham, 1982). They provide a standardized method for observing behavior often lacking in evaluations of on-the-job performance. Simulations also have been employed as a criterion for evaluating the effectiveness of other assessment techniques and training devices (King & Arlinghaus, 1976; Moses & Ritchie, 1975).
Managerial skills can be developed through the use of simulations. Participation in a simulation that diagnoses weaknesses often makes the manager ready to engage in training. Other simulations can then be used to acquire or enhance skills through practice. One advantage of the simulation for skill development is that it is likely to yield greater transfer of training to the work environment because actual behaviors are practiced in the context of other managerial activities and responsibilities.
Groups that normally work together can participate and be observed under realistic, yet nonthreatening, conditions. Once problems with team interactions are identified, the simulation can be replayed to explore alternative strategies (Kaplan et al., 1985; McCall & Lombardo, 1978). Actual work situations seldom provide such an opportunity to experiment with alternative managerial styles. For potential or new employees, a simulation can be a useful method for providing realistic information about the job, which in turn may lead to lower turnover (Wanous, 1977). For current employees, a simulation can be used to provide concrete examples of how the organization’s values and culture are reflected in problem solving and decision making. The simulation also provides a method for reinforcing those values or changing them (Kaplan et al., 1985).
A Hierarchy of Simulations with Increasing Complexity
Simulations can be viewed along a continuum of complexity that reflects levels and types of involvement of the participant. Complexity of both the stimulus conditions and response requirements is relevant here. Stimulus conditions include the background information presented, organizational descriptions provided, variety of content depicted in the problems, number of people with whom the participant must interact, and stress created in the instructions for the simulation, for example, by citing consequences of failure. Response requirements can also add complexity. Simple simulations may call for the demonstration of skills to talk with one individual about a specific performance or work habit problem, whereas complex large-scale simulations require complex decision-making strategies to deal with multiple inputs, unpredictable events, or diverse and competing groups.
Whether simple or complex, well-designed simulations have content validity, that is, the stimulus material and the response requirements in the simulation represent some process important to the actual job situation. In the more simple simulations, only a narrow range of the situational variables is presented. For example, in a one-on-one simulation, only the rudiments of the problem employee’s behavior are described, and the participant may be asked to discuss only a specific work performance problem. By contrast, in more complex organizational simulations, the employee’s performance problem is embedded in structural, financial, and environmental crises, and the participant may be required to deal with personnel matters in conjunction with decisions about payroll, purchases, and external demands.
It is important to note that a simple simulation may be a quite accurate depiction of one specific aspect of a manager’s job: A high level of complexity need not be present to provide an opportunity to assess managerial competence or to learn a managerial skill. In fact, there are clear advantages to isolating a set of behaviors to provide for focused assessment or training. For example, a simple presentation exercise allows the participant to demonstrate and practice skills of organizing and communicating material without the pressures of speaking before a large and hostile audience.
Simulations do not need to replicate jobs, that is, portray a given position, job, organization, or industry. Even the most complex simulations do not portray all the history and politics in an organization, the build-up of social pressures over time, the consequences and risks of choices, and the organizational culture. As is shown in the following sections, accurate assessment and effective training can be accomplished as long as the simulation represents relevant processes of management, organizational behavior, and organizational systems (Guion, 1977; Sackett, 1987).
In the following sections, we describe a sample of simulations with different amounts of complexity. In each section, we describe the activity, give one example of how it has been applied to develop managerial talent, and evaluate its effectiveness. Specifically, two criteria are used to evaluate simulations: (a) empirical evidence regarding their effectiveness, for example, standardization, reliability, and validity, and (b) practical considerations including cost, acceptability to participants and the organization, and utility.
One-on-One Interview Simulations
One-on-one interview simulations are short, yet powerful, simulations of specific interpersonal interactions. They may simulate interactions between the manager and a subordinate, peer professional, or customer. Interview simulations are adaptations of the role-play technique developed in the 1950s to foster attitude change (Fishbein & Ajzen, 1975). The participant is given time to read background information and prepare for the interaction, which usually takes only 8 to 10 minutes. In some assessment applications, participants interact with trained confederates who present standardized responses to participants' comments and suggestions. In these contexts, the interview simulation may measure oral communication skill, persuasiveness, leadership abilities, and listening skills.
One-on-one simulations are an integral part of management development programs designed at General Electric (Goldstein & Sorcher, 1974) to teach skills of managing subordinates. The training is structured around problems the manager faces, for example, poor performance of a subordinate, tardiness, or dealing with discrimination complaints. Participants are given opportunities to practice and watch others practice the skills in simulations, supportive feedback is given, and participants practice with increasingly difficult scenarios. In this context, the simulations are called “skill practice sessions” to emphasize that the participant should behave as he or she would on the job.
Several evaluation studies have been carried out on programs using one-on-one simulations as the key training technique. Participant reactions are usually quite favorable, and most of the research has shown that the training is effective in enhancing supervisory skills (Burnaska, 1976; Latham & Saari, 1979; Moses & Ritchie, 1975). However, some research shows little behavior change (McGehee & Thayer, 1978). Gist (1987) has pointed out that it is difficult to disentangle the contribution of the simulations themselves from the other elements of the programs applying social learning theory, such as the presentation of theory and practice.
Leaderless Group Discussion
The leaderless group discussion simulates numerous ad hoc committee situations in organizations in which problem analysis and decision making take place. Four to eight participants are given one or more written problems to solve in a specified period of time. They may also be given information relevant to different roles they have been assigned. A discussion may simulate a cooperative situation (e.g., a situation in which the participants formulate a new policy) or a competitive situation (e.g., a situation in which limited funds must be allocated among departments in a city government). No person is designated leader; the participants must display communication, decision-making, leadership, and interpersonal skills in an initially unstructured environment.
A public utility in Colorado uses the leaderless group discussion to give managers insight into their strengths and developmental needs in decision making and interaction skills. Participants see, sometimes quite graphically, the effects of their inability to emerge as a leader of the group in terms of either the ideas they contribute or the direction they can give the group. Self-awareness is enhanced when behavioral feedback is given by other participants and the trainer. The participants view a videotape of the discussion to obtain further insight into developmental needs.
The leaderless group discussion is somewhat more complex than the interview simulation. Although this complexity reflects organizational reality, some people have criticized the use of this technique as an assessment or training tool because it is somewhat unstandardized. Despite these inherent limitations, research shows that the leaderless group discussion requires participants to structure an initially ambiguous situation, help the group solve problems, and earn the esteem of other group members (Bass, 1954). There is considerable evidence of reliability and validity for the technique (Bass, 1954; Cascio, 1987; Thornton & Byham, 1982).
The In-Basket
The in-basket technique, named after the in-tray on the manager's desk, was originally developed as a simulation technique to study administrative skills among military officers (Frederiksen, 1962, 1966) and school officials (Hemphill et al., 1962). Participants are given instructions on the position they are to assume and considerable information about the organization structure, personnel, rules and regulations, union contracts, and so forth. An in-basket usually contains letters, memos, and reports that require action, and also considerable supplementary information that may or may not be useful. The participant typically has one to four hours to make decisions, request more information, direct others to action, schedule meetings, and write letters to outsiders. In some applications, the administrator will intervene with new information, for example, announcement of a budget freeze. Two phases of the in-basket can be evaluated: the written products themselves and the participant's description of the processes used in reviewing the data, deciding on a course of action, and organizing the work to be done.
As a training technique, the in-basket has been used as one way to practice administrative skills. A three-day program was developed for the Medical Group Management Association in which participants complete an in-basket after a series of lectures and discussions about skills such as planning and organizing, delegating, and controlling. In a group discussion, guided by the trainer, colleagues review the strengths and weaknesses of actions taken by each participant and generate alternative courses of action. Participants are given a scoring sheet to keep track of effective actions and missed opportunities related to the administrative skills. This record is used later in developmental planning.
In-baskets have been shown to have predictive validity when used by themselves, in conjunction with data from another source (e.g., a background interview or questionnaire responses from colleagues on the job), or as one component of a management assessment center (Thornton & Byham, 1982). Johnson Wax, for example, has found that an in-basket and a managerial behavioral questionnaire completed by subordinates, supervisor, and peers provide as much information about a manager as does an assessment center or a complex total organizational simulation (J. Huck, personal communication, March 16, 1988). Virtually all assessment centers for managerial development use an in-basket; the technique contributes unique information to the evaluation of numerous dimensions of managerial competency such as planning and organizing, decision making, and administrative control (Thornton & Byham, 1982).
Complex Decision-Making Simulations
Streufert has developed two elaborate, quasi-experimental simulations to study the components of managerial decision making (Streufert, 1986). They are called quasi-experimental because inputs to the participants are more controlled than in free simulations such as a leaderless group discussion. This work builds on Streufert’s long history of studying cognitive complexity, managerial behavior, and organizational effectiveness (Streufert & Swezey, 1986).
In one of the two parallel simulations, the participant assumes the position of Disaster Control Coordinator of Woodline County, where it has been raining for days and where several communities downstream from the dam are in danger. The participant can give directions to public and civil personnel to prepare for a crisis. In the second simulation, the participant is the governor of the developing country of Shamba faced with several problems. In both simulations, the participant studies extensive briefing materials and deals with the situation for six hours. The simulation can be run by one individual or by a group of participants. Each simulation includes the use of computers, both as a source of information and as an interactive recording device. They yield several measures of decision-making competence including managerial style variables, activity level, and speed of functioning.
These simulations have been used only recently in applied settings, and thus our case study is a discussion of some of the innovative research carried out by Streufert and his associates. Streufert (1986) argued that his simulations provide the most feasible, and possibly the only, medium for studying managerial decision making under conditions of uncertainty. New understandings of the structures and importance of cognitive complexity have been developed, frequently using simulation technology. At the individual level of analysis, cognitive complexity has been found to be related to various indicators of effectiveness—for example, leadership in situations of uncertainty where innovation is important, quality of decision making, and search for quality information. At the organization level, there is considerably less research, but Streufert and Swezey (1986) put forth 37 propositions relating complexity to properties of organizations, including task requirements, information demands, structure, and environmental pressure.
Initial research with managerial samples has been completed, and applications for assessment and training are underway in different organizations (Streufert, Nogami, Swezey, Pogash, & Piasecki, in press). These complex simulations hold great promise, but a number of questions must still be answered. The basic means of gathering input about the participant’s decision-making approach is from self-reports of reasons for action. More needs to be done to establish the validity of such self-reports. Other questions arise in relation to the assessment value of the simulations. Predictive validity studies have yet to be done. In particular, the relative role of the dimensions of decision-making competence assessed by these simulations in comparison with interpersonal, communication, and administrative skills assessed by simpler simulations must be established.
Large-Scale Behavioral Simulations
Each large-scale behavioral simulation presented in this section involves multiple business problems and opportunities, intensive and extensive interaction among managers, and observation by trained staff (Stumpf, 1988a, 1988b). These simulations are similar in that they require strategic decision making and organizational leadership, yet they differ in structure, context, and whether they require cooperation or competition. Because little published empirical information is available on most of the simulations, only the oldest one, Looking Glass, Inc., will be discussed in some detail. The others are described briefly.
The six-hour Looking Glass simulation was designed, developed, and tested over a three-year period by the Center for Creative Leadership (McCall & Lombardo, 1978) with support from the Office of Naval Research. The simulation was designed primarily as a research tool to generate hypotheses about managerial and organizational effectiveness and as a training technique. The simulation consists of 20 positions in three divisions across four levels (plant manager, director, vice-president, and president). The environments of the three divisions vary along two dimensions: stability and environmental uncertainty (Duncan, 1972).
At Martin Marietta, Looking Glass is used as a diagnostic and feedback tool in a top-level executive development program (T. Philbin, personal communication, March 14, 1988). Approximately one-half of the five-day assessment and development program is devoted to preparing for, participating in, and receiving feedback about performance in the simulation. Looking Glass is supplemented with other assessment techniques including the Myers-Briggs Type Indicator and a management practice questionnaire completed by the participant, his or her subordinates, and his or her superior prior to the program.
Twenty participants are observed by five or six trainers during the day-long simulation. Both team (division) and individual feedback are key elements of the executive development program. Team feedback provides a major basis for individual performance planning, and such feedback is perceived by participants as legitimate because it is validated from multiple sources (e.g., self, other participants, and trained observers). Individual results from the management practices questionnaires are provided and discussed in light of simulation behavior. At the close of the program, participants develop their own personal development plan based on insights developed from multiple sources during the previous four days.
The primary goal in designing the Looking Glass simulation was to construct a total organization simulation that reflected the demands of a typical managerial job and that would entice managers to run a simulated company (McCall & Lombardo, 1982). Therefore, a major consideration was the content validity of the simulation and its acceptability to participants. There is evidence that the activity pattern of managers during the simulation is similar to the pattern found by diary and observation studies conducted with managers in the field (McCall & Lombardo, 1979, 1982). Some criterion-validity information is available, and Kaplan et al. (1985) found that at both three weeks and six months after the simulation, managers reported more positive responses on a variety of issues (e.g., management team trust and effectiveness) than before the simulation. No long-range predictive validity studies have been reported.
Other Large-Scale Behavioral Simulations
Six additional complex simulations are described by Stumpf and Dunbar (1988). Foodcorp International is a simulation representing three levels of a manufacturing organization, requiring 13 roles, two product groups, and two subsidiaries. Products are sold to retail markets within the United States and internationally. The company uses a matrix organizational structure and has several committees to augment this structure. Foodcorp has 25,000 employees and $2.7 billion in sales. Globalcorp is a simulation of a diversified international conglomerate with $27 billion in assets with 13 senior management roles across three organizational levels. There are three sectors—banking services, advisory services, and investment services—each comprising two or more subgroups. Unlike the autonomous divisional activity common to Looking Glass, Globalcorp sectors involve coordination and competition across lines of business (Stumpf & Dunbar, 1988).
Metrobank, Investcorp, and Landmark Insurance Company are simulations of companies in the financial services industry. Each has 12 or 13 senior management positions across three levels and two major product–services areas (i.e., individual and corporate/institutional services). These firms can be used individually or in various combinations. For example, the data-processing problems in Metrobank might be resolved by subcontracting them with Investcorp (Stumpf & Dunbar, 1988). Northwood Arts Center is a simulated not-for-profit arts organization that is managed by seven directors, with expenses exceeding $3 million in the last year and a shortfall of $31,000. Northwood is composed of three units: Crandall Museum (2,500 members and 100,000 visitors annually), the New Horizons Theater (14,000 subscribers and 116,000 customers annually), and the staff and support services. As with most nonprofit agencies, Northwood has many constituencies to satisfy.
Assessment Centers
An assessment center is a complex procedure using one or more simulations such as those just described and, in some cases, presentations, written cases, and fact-finding activities. Because the assessment center method is not a single simulation, it does not fit neatly on our continuum. Therefore, we present it in a separate category. The content in the simulations may be unrelated (e.g., the leaderless group discussion and the in-basket may be set in different organizations) or may be totally integrated (e.g., the in-basket may provide information that is useful in a later leaderless group discussion; Slivinski, Grant, Bourgeois, & Pederson, 1977). Multiple, trained assessors, who are usually higher level managers, observe behavior, classify behaviors into performance dimensions, rate these dimensions across exercises, and make ratings regarding overall performance and potential. Dimensions that are typically assessed include administrative skills (e.g., planning, organizing, and decision making), interpersonal skills (e.g., leadership, personal impact, and behavior flexibility), and amount of activity (aggressiveness, energy level, and self-confidence; Thornton & Byham, 1982).
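The aggregation step just described—classifying behaviors into dimensions and rating those dimensions across exercises—can be sketched as follows. The exercises, dimensions, and ratings below are hypothetical, and real assessment centers rely on integrative assessor judgment rather than simple averaging:

```python
# Hypothetical assessor ratings on a 1-5 scale: exercise -> dimension -> rating
ratings = {
    "in_basket":        {"planning": 4, "decision_making": 3, "leadership": 2},
    "group_discussion": {"planning": 3, "decision_making": 4, "leadership": 4},
}

def dimension_means(ratings):
    """Average each performance dimension's rating across exercises."""
    dims = {}
    for exercise_ratings in ratings.values():
        for dim, score in exercise_ratings.items():
            dims.setdefault(dim, []).append(score)
    return {dim: sum(scores) / len(scores) for dim, scores in dims.items()}

print(dimension_means(ratings))
# → {'planning': 3.5, 'decision_making': 3.5, 'leadership': 3.0}
```

Comparing a dimension's ratings across exercises in this way is also what underlies the discriminant-validity question raised below: if ratings cluster by exercise rather than by dimension, the dimension scores are harder to interpret.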
At middle and higher levels of management, the results of an assessment center are frequently used for diagnostic purposes. In a program developed at Kodak’s Colorado division, an assessor gives feedback first to the participant, then to the participant and his or her manager. The assessment ratings of strengths and weaknesses on performance dimensions are used in conjunction with self-evaluations and input from subordinates and the manager to identify a small number of developmental needs. For each weak dimension, the specific area to be developed is specified. Developmental follow-up steps are written down, along with the goal to be achieved, measurable indicators of improvement, and dates for completion. The developmental plan then becomes a contract between the manager and his or her boss regarding future actions for improvement.
Hundreds of organizations use assessment centers to provide information for selection, promotion, and developmental purposes (Byham, 1986). Far more research has been conducted on the assessment center method than on any alternative (e.g., tests or large-scale behavioral simulations) for evaluating managerial potential (Thornton & Byham, 1982), and meta-analyses have shown that the overall assessment center rating has substantial predictive validity in relation to subsequent managerial performance and progress (Gaugler, Rosenthal, Thornton, & Benston, 1987; Hunter & Hunter, 1984; Schmitt, Gooding, Noe, & Kirsch, 1984). However, a lack of discriminant validity among dimension ratings within exercises has led some to question the construct validity of the assessment center method (Adams & Thornton, 1988; Klimoski & Brickner, 1987; Sackett & Dreher, 1982).
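The first aggregation step behind such meta-analyses is a sample-size-weighted average of observed validity coefficients, computed before corrections for artifacts such as range restriction and criterion unreliability. A minimal sketch, using hypothetical study values rather than the actual data from the meta-analyses cited above:

```python
def weighted_mean_validity(studies):
    """Sample-size-weighted mean of observed validity coefficients (r),
    the bare-bones first step of a validity meta-analysis."""
    total_n = sum(n for n, r in studies)
    return sum(n * r for n, r in studies) / total_n

# Hypothetical studies: (sample size, observed validity coefficient)
studies = [(120, 0.30), (85, 0.42), (200, 0.33)]
print(round(weighted_mean_validity(studies), 2))  # → 0.34
```

Weighting by sample size gives larger studies more influence, since their observed coefficients carry less sampling error.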
Theoretical Perspectives of Management Development Through Simulation
Three different theoretical perspectives underlie the use of simulations as methods for developing managerial talent. Theoretical justification for the use of simulations as a research tool points out their advantages over paper-and-pencil questionnaires on the one hand and observational methods on the other. When used as training techniques, simulations capitalize on a number of powerful adult learning principles. Finally, simulations are particularly effective assessment devices because of the psychometric principles underlying these forms of measurement.
Theoretical Basis for Research with Simulations
A simulation is an expression of a theory in which the most central components of a real-life situation are duplicated (Coppard, 1976; Raser, 1968). An organizational simulation is “valid” if a manager in the simulation behaves as he or she would under similar circumstances in the organization (McCall & Lombardo, 1979; Wernimont & Campbell, 1968). For example, Looking Glass, Inc., was designed to preserve as much of the contextual reality of organizational life as possible in order to study leadership behavior (McCall & Lombardo, 1979). Consistent with Raser’s (1968) definition of a simulation, a key objective of Looking Glass was to include examples of typical problems that managers encountered in organizations, so that simulation behavior would be representative of behavior in organizations.
Simulations provide a means of controlling and manipulating stimulus variables and thus provide a way to test theories of organizational behavior. For example, when a crisis episode is introduced in Streufert’s simulations, the behavior and decision-making activities of individual managers and groups of managers can be recorded and compared with previous noncrisis behaviors. Understanding of managerial phenomena is advanced when different methods of inquiry are used. For example, in the literature on conflict resolution, individuals who responded to surveys designed to assess attempts to negotiate or solve a problem indicated that they would take a harsher, less cooperative approach to bargaining than is observed in face-to-face interactions in a simulation (Baron, 1988; Bottger, 1984; Williams, Harkins, & Latane, 1981). Whereas questionnaires measure knowledge and beliefs about social interactions, simulations may be necessary to engage social processes and to measure the application of social skills.
Psychometric Foundations of Assessment with Simulations
Simulations are used to make assessments about competence, strengths and weaknesses, and potential. A number of psychometric principles underlying the design and use of simulations contribute to their effectiveness as evaluation devices, including standardization and reliability. Simulations are more standardized than observations and evaluations occurring in the real organization: All individuals are observed in the same situation, and, when properly trained, all evaluators apply the same standards. Furthermore, the reliability of simulation assessment can be enhanced by training observers and by making repeated observations of individuals in similar situations (Murphy & Davidshofer, 1988). For example, multiple leaderless group discussions can be conducted to assess whether participants consistently emerge as leaders. An essential ingredient of the assessment center method is the use of multiple, situational exercises (Task Force on Assessment Center Standards, 1980).
Perhaps the most central psychometric feature underlying simulation as an evaluation device is content validity. Simulations can be designed to portray essential features of managerial jobs and organizational processes realistically and to satisfy requirements for content-valid measures (Guion, 1977). Well-designed simulations are constructed to model well-defined domains, either a narrowly defined, specific, task-related skill or behavior or a more broadly defined, complex skill such as decision making. Simulations can be designed to elicit a sample of behavior representative of a large behavioral domain, that is, a manager’s job. A set of one-on-one role-play simulations can represent the types of employee problems that a supervisor must deal with, or a complex organizational simulation can cover problems dealt with by an executive. Sackett (1987) pointed out that the content validation approach to the development of simulations such as assessment centers requires careful attention not only to stimulus and response compatibility but also to many other aspects of the testing procedure, such as instructions. In summary, we believe that simulations of management domains are likely to compare very favorably in content validity with more traditional paper-and-pencil tests.
Related to the issue of content validity is the relation between the features of a measurement device and the nature of the targeted content domain. For construct validity, some managerial skills are more appropriately and adequately assessed through simulation than through paper-and-pencil tests. For example, a person’s skill in providing performance feedback can be more adequately observed and assessed in a one-on-one exercise than by presenting the examinee with written scenarios of performance review sessions at work and having him or her respond to each. Furthermore, some constructs cannot be assessed by techniques other than a simulation. For example, leaderless group discussions tap the emergence of a leader, which can be assessed only in the context of a group interaction.
Finally, Wernimont and Campbell (1968) and others (Asher & Sciarrino, 1974; Howard, 1983) have suggested that samples of behavior should be better predictors of future performance than signs of aptitudes or predispositions to behave (as indicated by traditional tests). According to the behavioral consistency approach, simulations in general should have higher predictive validity coefficients because the work sample or simulation and future performance are two different measures of job performance (Asher & Sciarrino, 1974; Wernimont & Campbell, 1968).
Theoretical Basis for Simulation as a Training Technique
Any well-designed management training program will use a range of techniques including didactic methods, simulations, and on-the-job experiences. In fact, a simulation is likely to be most effective when used in conjunction with more structured training methods such as lectures, reading assignments, and demonstrations (Manz & Sims, 1981; Pirolli & Anderson, 1985), which can provide an important conceptual framework (Bandura, 1986; Kolb, 1984) that participative methods by themselves often lack. Simulations employ many of the principles of adult (Knowles, 1970) and social learning theories (Bandura, 1977, 1986; Cooper, 1982; Goldstein & Sorcher, 1974; House, 1982). Active participation in the learning process, particularly important for adults (Mehta, 1978), allows the manager to experiment with alternative styles of behavior (Gagné, 1970; Kolb, 1984).
Participation with other managers allows the manager to learn vicariously by observing and modeling the successful behaviors of others (Bandura, 1986). Knowles (1970) has argued convincingly that adults learn most effectively through interactions with other adults. Simulations such as role plays and in-baskets encourage careful observation of others and introspection about one’s own beliefs and behaviors (House, 1982). Managers can then internalize those behaviors that lead to successful outcomes for themselves and for others and discard those behaviors that do not (Cooper, 1982; Kolb, 1984).
Simulations provide a significant opportunity for transfer of training because many of the conditions that foster transfer are present. Of most relevance to our discussion is the training design. Training tends to transfer when there are identical stimulus and response elements in the training program and the job situation, when the training covers general principles rather than rote practice of behaviors, when there is variability in the types of problems covered in the content of training, and when there are several different conditions and situations for practice (Baldwin & Ford, 1988). When considering simulations with differing levels of complexity, the training designer faces a clear dilemma. Simple, shorter-term simulations provide opportunities for practice with a wider range of problem situations, whereas longer simulations, requiring several hours to execute, assume one-trial learning from a single scenario. The more complex simulations maximize the number of identical elements and may convey general principles rather than narrow, specific skills.
Finally, Logan (1985) has articulated a distinction between skill and automaticity that is helpful for deciding when and what type of simulation should be employed. The term skill applies to performance on a complex task, whereas automaticity refers to specific properties of performance on tasks that can be performed effortlessly. In management training, automatic processes might be accomplished or developed best through more simple simulations such as role-play exercises. Complex simulations would be appropriate once automatic processes are already acquired and when a full set of skills is needed to perform successfully. The appropriate use of simple versus complex simulations may depend on the individual’s developmental stage of skill acquisition. We explore this idea more fully in the next section.
Evaluation and Implications
A Development Sequence
Simulations should be used in a planned and purposeful manner, progressing from the less complex in the earlier stages of development to the more complex in later stages. This recommendation is based on a concept of managerial readiness to develop that is analogous to the concept of readiness to learn (Craig, 1983). Moving from less to more complex simulations capitalizes on the special advantages of each type of simulation and minimizes their disadvantages.
The sequence of developmental experiences proposed here starts with more didactic forms of learning such as lectures and readings, proceeds to the use of simple demonstrations and controlled discussions, then involves the use of simple simulations to practice isolated, basic tasks, and only then culminates in the use of complex simulations to practice complicated skills. On-the-job experience then allows the manager to put the skills into practice. It is our contention that high-involvement methods will be effective only if basic knowledge and rudimentary skills have been acquired. This contention is supported by research (Britton & Tesser, 1982; Langer, 1984; Lesgold, 1984) that has shown that the prior knowledge and experience that a learner brings to a task will affect comprehension, thinking, and problem solving.
Several theoretical positions support our concept of managerial readiness to develop. Increasing realism in training procedures is beneficial up to a point, beyond which it becomes a detriment to learning because the learner misses the underlying relationships in complex simulations (Norris, 1986). Similarly, in highly active situations, the learner has little time for reflection (Keys, 1987). We suggest that there are individual differences in the level of simulation complexity that managers are ready to handle in the development process. As Bandura (1982) has pointed out, self-efficacy develops when there are multiple opportunities to be successful, other people can be observed performing successfully, credible people provide realistic social support, and the situation is not so emotionally arousing that the person questions his or her own competence to succeed. Simple skills can be learned in isolation under controlled conditions, such as a one-on-one interview simulation or an in-basket monitored by a trainer.
For inexperienced managers, large-scale behavioral simulations of organizations may not provide the conditions necessary to foster efficacy (Bandura, 1982). The premature use of highly complex simulations may actually negate several of these principles. We believe that the utilization and application of skills should be accomplished through challenging, vigorous simulation exercises. However, the manager should first possess a minimal level of readiness, including the basic tools (i.e., knowledge or skills) necessary to perform. If the participant has not acquired these basic skills, there is the potential that all the wrong conditions for effective development will be present: Performance does not lead to success, it is unclear what actions are appropriate, and emotional arousal and subsequent feelings of inadequacy may be high.
Evaluation of Simulations for Development of Managers
Each level of complexity of simulation has a place in the total set of management development activities. In deciding what simulations will be used in any given management development program and how, the professional should consider whether that application uses sound theory, whether empirical evidence exists to support that application, and whether it is feasible to apply the simulation in an appropriate way. The decision to use a simulation and the choice of simulation complexity should be based on whether a given application uses principles relevant to the intended outcome. For example, if the purpose of the development activity is assessment for diagnosis, the simulation must yield reliable and distinguishable measures of distinct managerial competencies. Then it is possible to prescribe training for specific skills. With regard to training applications, the practitioner must make sure the simulation fosters development of the type and level of managerial skill that matches the job needs (e.g., long-term planning is much different from short-term planning). More important, if we expect the simulation to lead to development of a skill per se, there must be some description of expected effective behavior, multiple opportunities for practice, guidance and reinforcement, and other conditions that foster learning.
The second consideration when evaluating a simulation is the empirical evidence regarding its effectiveness for the intended purpose. There is a joint responsibility of the developer and the user in this regard. The developer has the responsibility to conduct and publish research showing the effectiveness of an assessment technique (Standards for Educational and Psychological Testing, 1985) or a training procedure. The user has the responsibility to limit application to validated uses (American Psychological Association, 1981).
The third set of factors to consider when evaluating any simulation technique consists of practical ones. The user must be willing to devote resources to carry out the assessment or training technique as it was intended and validated. This is nothing more and nothing less than following the standardized procedures required by basic testing practice and experimentation. If the user shortcuts the necessary elements, the procedure may be rendered ineffective. For example, if adequate observer training is not provided for assessors or the participants themselves, otherwise content-valid simulations may not provide valid assessment of managerial skills. When simulations are used as training devices, they usually must be conducted in conjunction with a model of managerial competence for the user organization, clear descriptions of managerial skills, opportunities for coaching and reinforcement, debriefing of the simulation experience, and opportunities for subsequent training and evaluation. The effort to develop managers via the simulation may fail, not because of any inadequacy of the simulation, but because it was not conducted in the context of a coordinated program. Furthermore, even the most powerful and well-validated technique may fail without top-level support.
Simulations have been demonstrated to be effective over the past 30 years, and they will continue to play a pivotal role in the development of managers to meet future challenges. We recommend that simulations be used more frequently to train managers to deal with problems before they occur. For example, simulations might be developed to provide managers with experiences with multi-ethnic or international workforces, with a larger percentage of older workers, and with more part-time and temporary workers.
Organizations faced with the necessity of trimming management ranks must ensure that the remaining managers are maximally effective. Young managers beginning a career and managers assuming greater responsibilities will not have the luxury of long break-in periods and will be expected to become effective more quickly. Simulations provide a mechanism to accelerate the learning process for both new and experienced managers. They provide the means to diagnose specific training needs and thus ensure good use of shrinking training and development dollars. As training techniques themselves, simulations provide opportunities for managers to learn skills in a safe environment and avoid costly mistakes in actual job settings. In line with our concept of readiness to develop, we need better methods to determine the individual’s stage of development in order to assign the manager to the simulation with the appropriate level of complexity.
Organizations cannot afford to put large groups of management trainees in temporary assignments and weed out the less talented. Simulations provide a viable and cost-effective means to test out and develop managers in realistic situations. Many of the simulations reviewed in this article have demonstrated their effectiveness in decades of practical applications. Different types of assessment exercises can be developed by utilizing different combinations of stimulus and response modalities beyond the standard written and oral response to verbal and written presentations. For example, stimuli such as a social interaction could be presented on a videotape, and participants could be asked to respond in writing. Along these lines, computer and video disk technology could be used to assess managerial decision making in social situations. Newer simulations are emerging that promise to add to our arsenal of diagnostic and training techniques. With careful thought, simulations with varying complexity can make substantial contributions to the development of managerial talent.