
Government Efforts to Assess Agency IT Planning Need to Improve

Many have attempted to shed light on federal information technology (IT) projects and improve them for greater cost efficiency and better service delivery. Yet waste and failure continue to pervade federal IT: cost overruns, insufficient planning, weak leadership, poor communication, bad project management, and schedule delays that, in turn, cost taxpayers billions (e.g., HealthCare.gov, SBInet, the Expeditionary Combat Support System). For decades, the federal government has tried to curb wasteful spending on inefficient, duplicative, or low-priority IT investments. Over the past 10 years, the Office of Management and Budget (OMB) has been particularly active in creating tools for agencies to regularly assess their IT maturity, Chief Information Officer (CIO) objectives, enterprise architecture, and investment levels against common criteria, with the aim of improving project management, transparency, accountability, and performance. The OMB's efforts include the Federal IT Dashboard, PortfolioStat, and TechStat, each seeking to attach metrics to IT projects. Each effort, however, has encountered significant challenges that limited its effectiveness.

IT Dashboard – Inconsistencies and Subjectivity

In 2009, the OMB launched the Federal IT Dashboard to report the performance of major IT investments, in an effort to increase transparency and facilitate public monitoring of federal spending. Beyond public accountability, the OMB began using the Dashboard to identify at-risk investments. The site displays agencies’ cost, schedule, and performance data for over 700 major IT investments, and these data are intended to offer both a near-real-time and a historical perspective on each investment’s performance.

Investments are evaluated on a subjective risk scale that consists of five risk ratings: 1) high risk, 2) moderately high risk, 3) medium risk, 4) moderately low risk, and 5) low risk. These ratings are derived from the Investment Evaluation by Agency CIO, also known as the CIO Rating. Each CIO must assess his or her agency’s IT investments against six pre-established evaluation factors identified by the OMB and then assign a rating based on the CIO’s best judgment; a minimal code sketch of this scale appears after the table below.

Investment Evaluation Factors Identified by OMB for Assigning CIO Ratings

Each evaluation factor below is listed with its supporting examples:

Risk Management

  • Risk management strategy exists
  • Risks are well understood by senior leadership
  • Risk log is current and complete
  • Risks are clearly prioritized
  • Mitigation plans are in place to address risks

Requirement Management

  • Investment objectives are clear and scope is controlled
  • Requirements are complete, clear, and validated
  • Appropriate stakeholders are involved in requirements definition

Contractor Insight

  • Acquisition strategy is defined and managed via an Integrated Program Team
  • Agency receives key reports, such as earned value reports, current status, and risk logs
  • Agency is providing appropriate management of contractors such that the government is monitoring, controlling, and mitigating the impact of any adverse contract performance

Historical Performance

  • No significant deviations from planned cost and schedule
  • Lessons learned and best practices are incorporated and adopted

Human Capital

  • Qualified management and execution team for the IT investments and/or contracts supporting the investment
  • Low turnover rate

Other

  • Other factors that the CIO deems important to forecasting future success

Source: GAO Report: IT Dashboard: Agencies Are Managing Investment Risk, but Related Ratings Need to Be More Accurate and Available (GAO-14-64)
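
To make the rating scale concrete, the snippet below encodes the Dashboard’s five-point CIO rating scale as a simple lookup. This is an illustrative sketch only; the names and function are hypothetical and are not taken from the Dashboard itself.

```python
# Illustrative sketch of the IT Dashboard's five-point CIO rating scale.
# The dictionary and function names are hypothetical, not Dashboard code.
CIO_RATING_SCALE = {
    1: "High Risk",
    2: "Moderately High Risk",
    3: "Medium Risk",
    4: "Moderately Low Risk",
    5: "Low Risk",
}

def describe_rating(rating: int) -> str:
    """Translate a numeric CIO rating into its risk label."""
    if rating not in CIO_RATING_SCALE:
        raise ValueError(f"CIO ratings run from 1 (high risk) to 5 (low risk); got {rating}")
    return CIO_RATING_SCALE[rating]

print(describe_rating(2))  # Moderately High Risk
```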

Additionally, agencies must submit objective data, Exhibits 53 and 300, to the IT Dashboard annually and at least twice a year, respectively. Exhibit 53 is an annual submission listing the agency’s IT investments. It facilitates a process by which the OMB and the agency agree on which IT investments will be reported in the next President’s Budget Request and confirm the mapping of the agency’s IT investments. Information included on Exhibit 53 relates to mission delivery and management, infrastructure, automation, telecommunications, enterprise architecture, capital planning, grants management, security, cloud computing, and IT reductions and reinvestments. Exhibit 300 details a high-level summary of the planning, budgeting, acquisition, and management of IT assets; it is used to review each investment’s current status and assess whether the investment is meeting its goals. Exhibit 300 data are normally submitted to the OMB in August or September and updated in January in support of the President’s Budget submission to Congress.

Agencies use a combination of those details to identify their risk status and next actions. For instance, the Department of Agriculture’s Farm Program Modernization was rated high risk in December 2012 after receiving a low-risk rating every preceding month; the rating was changed after a November 2012 TechStat update revealed problems with the program, and leaders subsequently assigned 17 action items to correct them. Agencies with low risk ratings generally document processes that detail their monitoring and oversight. For example, the Department of Veterans Affairs relies heavily on deliverable dates as a key indicator of an investment’s progress, while the Department of Energy reviews all major IT investments quarterly and requires corrective action for programs not meeting requirements.

Despite the Dashboard’s goal of providing the public with information on IT investments, officials have made many critical errors, and the GAO has found several inconsistencies in data accuracy and reporting standards. Action items, for instance, were only selectively completed. The Social Security Administration resets an investment’s cost and performance baselines annually, which increases the risk of undetected cost or timeline variances and means its updates require different interpretation than other agencies’. The Department of Veterans Affairs, meanwhile, went 19 months without the ability to automatically submit ratings and therefore did not update them regularly.

Additionally, the OMB does not regularly update the public version of the IT Dashboard, which is frozen during preparation of the President’s budget. The Dashboard has gone without an update for over a year at a time, which precludes it from serving as a tool of public oversight. Without up-to-date information, a cycle of selective reporting and selective transparency continues. For example, the Department of Justice downgraded an investment in July 2012, but the change was not reflected on the IT Dashboard until April 2013, leaving the public with an inaccurate picture of the investment for roughly nine months.
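
A simple staleness check illustrates why such lags matter for oversight. The 90-day threshold below is a hypothetical choice for illustration, not an OMB rule; the dates come from the Department of Justice example above.

```python
from datetime import date

STALE_AFTER_DAYS = 90  # hypothetical oversight threshold, not an OMB rule

def is_stale(last_public_update: date, as_of: date) -> bool:
    """Flag an investment whose public data has not been refreshed recently."""
    return (as_of - last_public_update).days > STALE_AFTER_DAYS

# DOJ example: a July 2012 downgrade did not appear on the public
# Dashboard until April 2013, roughly nine months later.
print(is_stale(date(2012, 7, 1), date(2013, 4, 1)))  # True
```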

TechStat – Effective When Used, But Not Used Often Enough

The OMB initiated TechStat in 2010. TechStat is a face-to-face, evidence-based accountability review of IT investments, powered by data from the IT Dashboard and other sources. In addition to Dashboard data, internal information such as investment scoring and cost, schedule, and performance data is submitted to the OMB via TechStat paperwork, and external data such as GAO reports, Office of Inspector General reports, news coverage, and human intelligence is entered as well. These reviews allow the federal government to intervene in, or terminate, failing IT projects. OMB and agency leadership participate in TechStat sessions, and CIOs are also empowered to hold their own.

A TechStat is triggered when an agency determines that one of its projects is underperforming, a determination made using the risk ratings from the IT Dashboard. The TechStat meeting includes a briefing on the investment’s management, program performance data, and opportunities for corrective action, and the session is intended to end with next steps and a formal, concrete action plan; a rough sketch of such a rating-based trigger follows.
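
The sketch below flags investments whose CIO rating falls at or below “moderately high risk” as TechStat candidates. The record layout, threshold, and investment names are illustrative assumptions, not the OMB’s actual selection logic.

```python
# Hypothetical sketch of a rating-based TechStat trigger; the record
# layout and threshold are illustrative assumptions, not OMB logic.
from typing import List, NamedTuple

class Investment(NamedTuple):
    name: str
    cio_rating: int  # 1 = high risk ... 5 = low risk, per the Dashboard scale

TECHSTAT_THRESHOLD = 2  # flag high and moderately high risk investments

def techstat_candidates(portfolio: List[Investment]) -> List[Investment]:
    """Return the investments whose CIO ratings mark them as underperforming."""
    return [inv for inv in portfolio if inv.cio_rating <= TECHSTAT_THRESHOLD]

portfolio = [
    Investment("Case Management Modernization", 2),  # hypothetical
    Investment("Payroll Consolidation", 4),          # hypothetical
]
for inv in techstat_candidates(portfolio):
    print(f"Schedule a TechStat session: {inv.name}")
```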

Some agencies have taken TechStat a few steps further. The U.S. Department of the Interior (DOI) implemented its own version of TechStat, called iStat. iStat takes a 360-degree approach, examining not just the IT investment itself but the project’s functionality, accountability, and performance issues as a whole. The iStat process consists of two bodies: the iStat Performance Review Board and the iStat Executive Committee (IEC). The review board assesses each investment’s performance and compliance and recommends corrective actions; its assessments and recommendations are then forwarded to the IEC for action. Through the iStat process, the DOI has achieved $50 million in cost avoidance by terminating two projects and structurally reforming other investments.

In 2014, the GAO (GAO-14-671T) reported that TechStat sessions were effective in identifying weaknesses within agencies but had limited impact on the at-risk projects themselves: TechStat identifies problematic programs but lacks a mechanism to force the necessary changes. The GAO also reported that the number of TechStat sessions conducted was small relative to the number of medium- and higher-risk IT investments; only about 19 percent of at-risk investments received a TechStat session, with little explanation for why coverage varied across agencies. For instance, the Department of Commerce reviewed 58 percent of its at-risk projects while the Department of Health and Human Services reviewed only 13 percent. Overall, more work is needed to track the outcomes of TechStat sessions, and more sessions need to be held for the process to work effectively.
