Why This Page Exists
If you're choosing where to spend four years and tens of thousands of dollars, you deserve to know how the rankings you're reading were built. Every ranking site makes choices about what to measure and how to weight it, and those choices shape which schools appear at the top. We think you should be able to see ours and decide for yourself whether they match what matters to you.
Hakia's rankings cover 1,704+ accredited degree programs across 20 technology fields and all 50 states. Every score comes from the same algorithm applied to the same federal datasets. We don't accept payments from schools to adjust rankings, and no member of our editorial team manually overrides algorithmic results.
The Scoring Model
We use a 4-factor weighted composite score, normalized to a 0-100 scale. The top-scoring program in any given ranking receives a 100; every other program is scored proportionally. Here are the four factors and why we chose them.
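Before walking through each factor, here's a minimal sketch of the composite itself, for readers who prefer code. The field names, data structures, and rounding are illustrative assumptions, not our production pipeline:

```python
# Sketch of a 4-factor weighted composite, rescaled so the top program scores 100.
# Weights match the model described on this page; everything else is illustrative.

WEIGHTS = {
    "completions": 0.35,   # program completions (square-root normalized)
    "grad_rate": 0.25,     # institution-wide graduation rate
    "selectivity": 0.20,   # inverted admission rate
    "outcomes": 0.20,      # state-level median salary for the field
}

def composite(program: dict) -> float:
    """Weighted sum of the four factor scores, each already on a 0-100 scale."""
    return sum(program[factor] * weight for factor, weight in WEIGHTS.items())

def rank(programs: list[dict]) -> list[dict]:
    """Score every program, then rescale so the top score in the pool is 100."""
    raw = [composite(p) for p in programs]
    top = max(raw)
    for p, score in zip(programs, raw):
        p["score"] = round(100 * score / top, 1)
    return sorted(programs, key=lambda p: p["score"], reverse=True)
```

The rescaling step is why every ranking has exactly one 100: scores are relative to the strongest program in that pool, not absolute.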
Program Completions: 35% Weight
This measures how many students graduate from a specific program each year, identified by CIP (Classification of Instructional Programs) code. It's our heaviest factor because it captures something that graduation rate and selectivity miss: whether a school has a real, functioning department in the field you care about.
A university might have a 95% graduation rate and a 5% acceptance rate, but if only three students graduated from their computer science program last year, that tells you something important about the department's scale, resources, and faculty investment.
We apply square-root normalization to this factor. Without it, massive state universities with 2,000+ graduates per year would dominate every ranking, and a strong program graduating 200 students would barely register. The square root compresses the gap: a program with 400 graduates scores twice as high as one with 100, not four times as high. This rewards established programs while keeping mid-sized departments competitive.
Source: IPEDS Completions survey, filtered by CIP code for the specific field.
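To make the compression concrete, here's the arithmetic as a hypothetical helper (scaling against the pool's largest program is our illustrative choice):

```python
import math

# Square-root compression: raw completions are square-rooted before scaling,
# so a 4x gap in graduates becomes a 2x gap in the factor score.
def completions_factor(completions: int, pool_max: int) -> float:
    """Scale sqrt(completions) against the largest program in the pool, 0-100."""
    return 100 * math.sqrt(completions) / math.sqrt(pool_max)

# A program with 400 graduates scores twice, not four times, one with 100:
assert completions_factor(400, 2500) == 2 * completions_factor(100, 2500)
```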
Graduation Rate: 25% Weight
This is the percentage of students who finish their degree within 150% of the expected time (six years for bachelor's programs, three years for associate's). It's the single best proxy for whether a school actually supports its students through to completion, or whether it admits them and lets them figure it out.
We use the institution-wide graduation rate rather than a program-specific one because IPEDS doesn't report graduation rates by major. This means a school's CS graduation rate is approximated by its overall rate. It's an imperfect measure, but it still differentiates well: a school where 90% of students graduate is doing something fundamentally different from one where 30% do.
Source: IPEDS Graduation Rates survey (GR and GR200 components).
Selectivity: 20% Weight
We invert the admission rate: a school that admits 10% of applicants scores 90 on this factor; one that admits 60% scores 40. More selective institutions tend to have stronger academic environments, more competitive peers, and graduates who are more attractive to employers. Not because selectivity causes quality, but because it correlates with the resources, faculty, and culture that do.
The tricky part is community colleges. Most have open admission policies, meaning their acceptance rate is 100% and their selectivity score would be zero. That's not fair to them. Community colleges serve a different mission, and many run excellent technical programs. So for open-admission institutions, we substitute the graduation rate as a proxy for selectivity. A community college with a 60% completion rate earns a 60 selectivity score, which rewards the ones that maintain academic rigor without penalizing open-access philosophy.
Source: IPEDS Admissions survey (ADM component). Open-admission status from Institutional Characteristics.
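In code, the substitution might look like this sketch, assuming rates are expressed on a 0-100 scale and the open-admission flag comes from Institutional Characteristics:

```python
def selectivity_factor(admission_rate: float | None,
                       open_admission: bool,
                       grad_rate: float) -> float:
    """Inverted admission rate, with the graduation rate substituted for
    open-admission schools. All rates are percentages on a 0-100 scale."""
    if open_admission or admission_rate is None:
        return grad_rate           # 60% completion -> 60 selectivity
    return 100.0 - admission_rate  # 10% admit rate -> 90 selectivity
```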
Career Outcomes: 20% Weight
This factor uses state-specific median salary data from the Bureau of Labor Statistics for occupations in the program's field. For computer science, that's software developers (SOC 15-1252); for cybersecurity, it's information security analysts (SOC 15-1212); and so on for each of our 20 program fields.
We use state-level rather than national salary data because where you study often determines where you work, at least initially. A computer science program in California feeds into a job market where the median software developer earns $145,770. The same degree in Mississippi feeds into a market at $88,430. Both are valid outcomes, but the difference is real and worth reflecting in a state-specific ranking.
This factor is the same for every school within a given state + program combination. It doesn't differentiate between Stanford and San Jose State for California CS. The other three factors do that. What it does is ensure that state rankings for high-paying fields in strong job markets reflect the economic opportunity available to graduates.
Source: BLS Occupational Employment and Wage Statistics (OEWS), annual state-level data.
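A sketch of how the lookup fits together; the `FIELD_TO_SOC` mapping echoes the examples above, while the wage table and the scale-against-national-max step are our illustrative assumptions:

```python
# Map each program field to its SOC occupation, then look up the state median
# wage. SOC codes for CS and cybersecurity are from the text above; the wage
# table here is a stand-in for the BLS OEWS dataset.

FIELD_TO_SOC = {
    "computer_science": "15-1252",  # software developers
    "cybersecurity": "15-1212",     # information security analysts
}

def outcomes_factor(field: str, state: str,
                    wages: dict[tuple[str, str], float]) -> float:
    """Scale the state median wage for the field's occupation to 0-100,
    against the highest state median for that occupation nationwide."""
    soc = FIELD_TO_SOC[field]
    wage = wages[(soc, state)]
    national_max = max(w for (s, _), w in wages.items() if s == soc)
    return 100 * wage / national_max

# Using the two state medians cited above:
wages = {("15-1252", "CA"): 145_770, ("15-1252", "MS"): 88_430}
print(outcomes_factor("computer_science", "CA", wages))            # 100.0
print(round(outcomes_factor("computer_science", "MS", wages), 1))  # 60.7
```

Because the function depends only on field and state, every school in a given state + program combination receives the same outcomes score, exactly as described above.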
How Degree Levels Are Handled
The same 4-factor model with the same weights applies across all degree levels: associate's, bachelor's, master's, and doctoral. We rank each level separately. A bachelor's program competes against other bachelor's programs in the same state and field, never against associate's or master's programs.
The main difference by level is in the graduation rate window. Associate's programs use a 3-year completion rate (150% of the expected 2-year timeline). Bachelor's programs use a 6-year rate. Graduate programs use the institutional graduate completion rate where available.
We also apply a minimum threshold: a degree level only appears in our rankings for a state if at least 2 schools offer programs at that level. A state with one doctoral program in cybersecurity doesn't get a doctoral ranking because there's nothing meaningful to compare.
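Both rules are simple enough to state as code; the names here are hypothetical:

```python
# The 150% completion window by degree level, and the 2-school minimum
# before a state/field ranking exists at that level.

COMPLETION_WINDOW_YEARS = {
    "associates": 3,  # 150% of the expected 2-year timeline
    "bachelors": 6,   # 150% of the expected 4-year timeline
}

def rankable(programs_at_level: list) -> bool:
    """A ranking is published for a level only with at least 2 programs."""
    return len(programs_at_level) >= 2
```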
Online Program Rankings
Our online rankings use the identical methodology: same four factors, same weights. The only difference is the pool of schools: we filter to institutions flagged in IPEDS as offering distance education in the relevant program field. This includes fully online programs, hybrid programs, and schools where the majority of coursework can be completed remotely.
Online programs often show lower graduation rates than their on-campus counterparts. This isn't necessarily a sign of lower quality. Online students are more likely to be working adults, part-time students, or career changers with different completion patterns. We don't adjust for this. The algorithm treats an online 45% graduation rate the same as an on-campus 45%. Prospective students can weigh that context themselves.
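As a sketch, the pool filter is the only new code path; the `distance_education` flag name is our stand-in for the IPEDS indicator:

```python
def online_pool(programs: list[dict]) -> list[dict]:
    """Keep only programs flagged for distance education;
    scoring then proceeds exactly as for the main rankings."""
    return [p for p in programs if p.get("distance_education")]
```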
Where the Data Comes From
Every data point in our rankings traces back to one of three federal sources. We don't survey schools, we don't accept self-reported data, and we don't use third-party aggregators. The Data Sources list at the end of this page details exactly what we pull from each.
What About ABET Accreditation?
ABET accreditation is the gold standard for computing and engineering programs. It means a program's curriculum has been reviewed by industry and academic experts and meets established quality standards. For fields like computer science, software engineering, and computer engineering, ABET accreditation matters to employers, and some won't consider candidates from non-ABET programs.
We display ABET accreditation status on school profiles where applicable, but it is not a factor in our composite score. The reason: ABET accreditation is binary (you have it or you don't), while our scoring model needs continuous variables that differentiate across a spectrum. Including it as a scoring factor would create a cliff where every ABET school gets a bonus and every non-ABET school gets penalized, which doesn't reflect the gradual differences in program quality we're trying to capture.
We believe ABET accreditation should inform your decision, which is why we surface it prominently. But it shouldn't be the only thing that matters, and baking it into the algorithm would overweight a single credential at the expense of the graduation rates, program size, and career outcomes that also signal quality.
Affiliate Relationships and Editorial Independence
Hakia earns revenue through affiliate partnerships with education platforms. When you click certain links or widgets on our site and enroll in a program, we may receive a commission. This is how we fund the data work, research, and infrastructure behind these rankings.
Affiliate relationships do not influence rankings. Our scoring algorithm has no input for "is this school an affiliate partner" because that input doesn't exist. Schools cannot pay to improve their rank, appear higher in results, or be featured in our top program cards. The affiliate widgets on our pages promote education platforms broadly, not specific ranked schools.
We disclose this because transparency about revenue models is part of what makes a ranking trustworthy. If you see an affiliate widget on a ranking page, know that it exists alongside the rankings, not because of them.
What Our Rankings Don't Capture
No ranking system is complete, and we'd rather be upfront about the gaps than pretend they don't exist.
- Teaching quality: IPEDS doesn't measure how well professors teach or how engaging the coursework is. Graduation rates are a distant proxy at best.
- Student experience: Campus culture, student organizations, research opportunities for undergrads, and the day-to-day experience of being a student aren't captured in federal data.
- Program-specific graduation rates: IPEDS reports institution-wide graduation rates, not by major. A school's CS program might retain students better or worse than the school average.
- Employer perception: Some employers strongly prefer graduates from specific programs. This reputation factor isn't in our data.
- Cost of living: A $145,770 salary in San Francisco has different purchasing power than the same salary in Austin. We report raw salary data without cost-of-living adjustment.
- Recent changes: IPEDS data lags by about two years. A program that hired five new faculty last year or launched a new AI specialization won't show those improvements until the next data cycle.
Rankings are a starting point, not a verdict. Use ours to narrow your list, then visit campuses, talk to current students, and evaluate the things data can't measure.
How Often Rankings Are Updated
We refresh our rankings when new IPEDS and BLS datasets are published, typically once per year. The current rankings use IPEDS 2023 (released late 2024) and BLS OEWS 2024. When a new dataset is released, we re-run the entire scoring pipeline across all 1,704+ programs and publish updated rankings within 30 days.
Between major data releases, we update editorial content (career guides, program descriptions, enhanced research on top schools) on a rolling basis. Each page displays its last-verified date so you can see how recently the content was reviewed.
Data Sources
- IPEDS (NCES): institutional characteristics, degree completions by CIP code, graduation rates, admissions, tuition, financial aid, and distance education data
- BLS OEWS: occupational employment counts and wage estimates by state and metro area
- College Scorecard: post-graduation earnings, debt-to-earnings ratios, and federal loan repayment rates
- CIP codes: the Classification of Instructional Programs taxonomy used to identify specific degree fields
Taylor Rupe
Co-founder & Editor (B.S. Computer Science, Oregon State • B.A. Psychology, University of Washington)
Taylor combines technical expertise in computer science with a deep understanding of human behavior and learning. His dual background drives Hakia's mission: leveraging technology to build authoritative educational resources that help people make better decisions about their academic and career paths.