James Chandler, vice president of security for Western Governors University, has successfully moved security left by implementing a scorecard system that requires devops teams to meet certain standards and enforcing those expectations through business processes.
In short, Chandler says, teams must earn top scores to push out their code without approvals. If they don’t have those top scores, they must defend their lower marks and the reasons for them before proceeding with releases.
“It’s about pushing security left, so security starts early and flows through the development lifecycle,” Chandler says. “It’s a huge factor we’ve been trying to push as a security group, because we have better security success when the developers are taking part in it during their day-to-day [work] versus when the security team finds an issue and has to ask them to go solve it.”
The creation of a scorecard to improve security initially stemmed from concerns about the technology infrastructure’s performance.
Western Governors University is a 100% online university focused on helping underserved populations access higher education. The Salt Lake City-based institution serves 130,000 students, most of whom are based in the United States.
Given its mission, Chandler says he and the university’s other technology leaders consistently look for ways to improve the reliability of systems as well as the data’s confidentiality, integrity and availability (the well-known CIA triad that helps guide information security policies in many organizations).
However, Chandler and his peer VPs in early 2019 assessed performance data and concluded that the systems weren’t as reliable as they should be. They were seeing outages across the university’s platform that were negatively impacting both students and staff. Such incidents get priority attention; as Chandler notes, “If it impacts students, it quickly becomes a severity 1 issue.”
Aligning on expectations
The university’s IT and security leadership determined that existing business processes within their domains did not incentivize the successful deployment of software in accordance with the CIA triad.
Chandler explains that various factors contributed to that scenario.
The university had a complex infrastructure with more than 1,000 third-party vendors and dozens of product teams.
That complexity made meeting security requirements, such as the timely patching of vulnerabilities, challenging, because the work often spanned disparate teams, from security to engineering.
Moreover, security teams and IT groups weren’t aligned on expectations.
“Their ideas on how fast to resolve issues were different from ours, so we weren’t hitting the security targets we wanted because of challenges getting everyone aligned,” Chandler says, noting that the university’s goal tracks to “five-nines availability,” or 99.999% network and service accessibility for users.
After reviewing the situation with the other vice presidents, Chandler says that he concluded that improving security performance and the overall experience for users, particularly the students, couldn’t happen without certain organizational changes.
“We came to a point where we hit a wall,” he says. “But once we identified those different factors, we started to look at what we can do to solve for them.”
Chandler collaborated with several other vice presidents (who collectively had responsibility for security as well as IT infrastructure, technology operations and educational technology) to find the best way forward.
They then created a committee to examine the issues and propose a solution, assigning seven of their directors (all of whom reported to the VPs and oversaw related areas, such as engineering and general operations).
The directors worked for a few months in fall 2019 and proposed a scorecard that would track and measure performance in five areas tied to system reliability and availability: availability (99.99%), problem resolution (<30 minutes), problem record error budget (<100 severity demerits), security hygiene (<0) and root cause analysis task hygiene (<0).
“That then became part of our operations metrics, and each team has to present their scorecard and defend what the score was and why,” Chandler says, explaining that scores are recorded electronically in the university’s ServiceNow platform as part of its agile development management processes. “If they had updates or new code to push, they had to use the scorecard.”
The scorecard, which comes with multiple enforcement layers, increased the incentive to maintain the highest scores.
For example, teams with a score of 5 have the most freedom to go to market; teams on the other end of the scale face the most additional requirements, including release approvals—essentially, Chandler says, “more eyes looking at you because you’re not keeping up with the program.” Scores of 4, 3 and 2 carry progressively more requirements as the score drops.
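In rough terms, the mechanism described above amounts to a simple gate: each team earns up to one point per target met across the five areas, and the resulting score determines how much oversight a release requires. The sketch below is a hypothetical illustration of that logic; the metric names, thresholds and requirement tiers are assumptions for clarity, not Western Governors University’s actual implementation.

```python
# Hypothetical sketch of a release-gating scorecard like the one described.
# All field names, thresholds, and requirement tiers are illustrative
# assumptions, not WGU's actual ServiceNow configuration.
from dataclasses import dataclass

@dataclass
class TeamMetrics:
    availability_pct: float        # measured availability, e.g. 99.992
    resolution_minutes: float      # time to resolve problems
    severity_demerits: int         # problem-record error budget consumed
    overdue_security_tasks: int    # security hygiene backlog
    overdue_rca_tasks: int         # root-cause-analysis task backlog

def score(m: TeamMetrics) -> int:
    """One point per target met, yielding a score from 0 to 5."""
    checks = [
        m.availability_pct >= 99.99,
        m.resolution_minutes < 30,
        m.severity_demerits < 100,
        m.overdue_security_tasks == 0,
        m.overdue_rca_tasks == 0,
    ]
    return sum(checks)

def release_requirements(s: int) -> str:
    """Higher scores mean fewer gates before a team can push code."""
    if s == 5:
        return "self-service release"
    if s == 4:
        return "release with peer review"
    if s in (3, 2):
        return "release with director sign-off"
    return "release blocked pending formal approval"
```

A team meeting every target would score 5 and release without approvals, matching Chandler’s description; a team missing targets accumulates review requirements instead of being blocked outright, which mirrors the “more eyes” approach he describes.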
This scorecard was implemented in 2020; it was optional at first and then mandatory.
Chandler says the scorecard moved the teams from their devops practices closer to devsecops.
“I don’t know if we’re fully there yet, but this was a large leap in the right direction. They start with security in the early stages and it stays through the lifecycle,” Chandler says.
It also created more accountability and alignment as well as a greater sense of shared responsibility.
“Tying [those five performance areas] together in one scorecard helps each team understand that those different areas are their responsibility with the support of the other groups. That was part of the expectation-setting, tying those all together,” he adds.
Chandler notes that because this concept came from the directors who work directly with the teams, adoption was quick and successful: “The buy-in came from the directors themselves; they’d go to their teams and say, ‘Here’s what we’re doing.’ And because it was very local instead of coming down from senior leaders at the top, the buy-in was a lot easier.”
Reaping the rewards
The use of a scorecard also created some healthy competitiveness, with teams aiming for high marks to earn recognition at the biweekly meetings. Chandler says it’s rare that all teams earn 5s, the highest mark, in any given two-week stretch; it’s more common for the majority to earn 4s, which is roughly the average.
On the other hand, teams with low scores are required to discuss their challenges in those meetings.
That isn’t meant to be punitive, Chandler says. Rather, the process gives teams a chance to hear ideas, garner insights and get help from others—especially as some of the factors for lower scores relate to challenges others encounter, such as working with legacy technology.
“It created a community rather than siloed teams,” he adds.
Security and performance improvements came as quickly as the scorecard’s adoption, Chandler adds; he saw gains in various areas, including system uptime, within a month.
More specifically, the scorecard implementation generated a 62% reduction in breached security tickets, a 72% reduction in vulnerabilities, and a 34% reduction in the mean time to remediate security issues.
Chandler sees more positive achievements ahead. He and his colleagues plan to not only continue using the scorecard but boost scorecard requirements to bring incremental improvements to the departments’ performance.
“They solved the problem and the task they were given,” Chandler says. “Now we’re looking at it and saying, ‘What areas are next that we can improve on?’”
Copyright © 2022 IDG Communications, Inc.