Following the recent US Supreme Court ruling on Affirmative Action, universities are scrambling to (re)define their admissions policies. Justice Thomas’s criticism of Affirmative Action is that it creates uncertainty detrimental to high-performing minority graduates. A July 2023 NY Times article describes this as: “At Yale, [Thomas] was one of only 12 Black students in his law school class, admitted the year the law school introduced an affirmative action plan. His white classmates viewed him as a token, he felt — a belief in the corrosive effects of affirmative action that was only deepened by his failure to win the law firm job he had dreamed of. “I’d graduated from one of America’s top law schools, but racial preference had robbed my achievement of its true value,” [Thomas] later wrote.” A neoliberal approach would let the market sort things out: universities admit candidates as they wish, but must archive and disclose a nutrition facts-style label to students and employers about how they made the decision. Here’s a modest proposal for how it could be implemented…

  • Force admissions offices to define and declare what they care about. Applications are scored in a variety of categories, which must be disclosed (race, socioeconomics, SAT scores, underwater basket weaving skills, etc.). Admissions offices are encouraged to adopt algorithmic procedures (ML, AI) for assigning scores to these categories, as this allows for reproducibility. Humans are inscrutable, and so are the forms of ML that might be used in such a procedure; the advantage of the algorithm is that we can run it again. Scores from human readers, coaches, etc., are recorded along with the identities of those readers, so as to enable later statistical detection of bias, bribery, etc.
  • Force admissions offices to define and declare the relative weights and interactions through which these factors contribute to the admission decision. Scores for each category are inputs to a symbolic model. This might be as simple as a linear regression, but could be some other symbolic regression model, a decision list, etc. Universities may allocate whatever weights they wish to race, ethnicity, socioeconomics, SAT scores, essays, legacy status, athletics, a target number of underwater basket weaving majors, etc.
  • Full disclosure, and no cooking the books. Universities submit their inputs, the algorithm they used (a tracked version of the model), and the associated weights to a central clearinghouse (either federal or privately run by the College Board, etc.) that runs the models and sends admission announcements. This clearinghouse uses Gale-Shapley stable matching to assign students to schools, following the precedent set by medical residency assignments. (Stable matching has little to do with affirmative action, but I just think it is a good idea, so I’m going to throw it in.)
  • Keep a long-term record of what happened and why. The central clearinghouse maintains a repository of the inputs and algorithms; participation in this process is a requirement for universities to receive federal funds. The contents of this repository provide a corpus for algorithmic fairness researchers, subject to suitable anonymization. When a student agrees to attend a school, they receive an NFT that records: (i) the model that admitted them; (ii) their scores in the various categories; (iii) their rank in the admissions pool. In other words, they receive a “transcript” of why they were admitted.
  • Enable employers to query the record, and find out if they care. The hiring market could choose to make use of this information or not, but to do so it must make an inquiry to the central clearinghouse. After graduation, employers may request (and candidates may choose to provide) access to this scoring information, to assure employers that they were admitted for the “right” reasons. Blockchain, mumble, mumble, zero-knowledge proof, mumble, mumble… Tracking access to this information would let us determine whether Justice Thomas’s critique is valid (do employers care enough about why students were admitted to make a request?), which would inform subsequent decisions by the Court. Assuming employers care at all, a truly market-based approach would experiment with pricing queries to the clearinghouse data to discover how much they are worth to employers, with the revenue contributed to a scholarship fund.
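To make the first two bullets concrete, here is a minimal sketch of what a disclosed, versioned scoring model could look like. Everything here is illustrative: the category names, the weights, and the choice of a plain linear model are my assumptions, not anything a real admissions office uses.

```python
# Hypothetical sketch of a disclosed admissions model. Category names,
# weights, and the linear form are illustrative assumptions only.
from dataclasses import dataclass

@dataclass(frozen=True)
class DisclosedModel:
    version: str               # tracked model version, for reproducibility
    weights: dict[str, float]  # declared weight for each scored category

    def score(self, applicant_scores: dict[str, float]) -> float:
        # A simple linear model: the declared weights are the whole story.
        return sum(w * applicant_scores.get(cat, 0.0)
                   for cat, w in self.weights.items())

model = DisclosedModel(
    version="2024-admissions-v1",
    weights={"sat": 0.4, "essay": 0.3, "socioeconomics": 0.2,
             "underwater_basket_weaving": 0.1},
)
applicant = {"sat": 0.9, "essay": 0.7, "socioeconomics": 0.5,
             "underwater_basket_weaving": 1.0}
total = model.score(applicant)
# 0.9*0.4 + 0.7*0.3 + 0.5*0.2 + 1.0*0.1 = 0.77
```

The point of the `version` field is the reproducibility argument above: anyone holding the versioned model and an applicant's category scores can re-run the decision exactly.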
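The clearinghouse's matching step can also be sketched. Below is the classic student-proposing Gale-Shapley deferred-acceptance algorithm in its simplest one-to-one form; a real clearinghouse would use the many-to-one variant with school capacities, as the medical residency match does, and the preference lists here are toy data of my own invention. This sketch also assumes complete preference lists on both sides.

```python
from collections import deque

def gale_shapley(student_prefs, school_prefs):
    """Student-proposing deferred acceptance (Gale-Shapley).

    student_prefs: {student: [schools in preference order]}
    school_prefs:  {school: [students in preference order]}
    Returns a stable one-to-one matching {school: student}.
    Assumes complete preference lists on both sides.
    """
    # Precompute each school's ranking of students for O(1) comparisons.
    rank = {school: {st: i for i, st in enumerate(prefs)}
            for school, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}  # next school to propose to
    free = deque(student_prefs)
    match = {}  # school -> tentatively held student
    while free:
        st = free.popleft()
        school = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        held = match.get(school)
        if held is None:
            match[school] = st                     # school was empty: hold
        elif rank[school][st] < rank[school][held]:
            match[school] = st                     # school trades up
            free.append(held)                      # displaced student re-enters
        else:
            free.append(st)                        # rejected: propose again later
    return match

students = {"a": ["X", "Y", "Z"], "b": ["X", "Z", "Y"], "c": ["Y", "X", "Z"]}
schools  = {"X": ["b", "a", "c"], "Y": ["a", "c", "b"], "Z": ["a", "b", "c"]}
assignment = gale_shapley(students, schools)
# -> {"X": "b", "Y": "a", "Z": "c"}
```

The guarantee deferred acceptance buys you is stability: no student-school pair exists that would both prefer each other over their assigned match, which is exactly why the residency match adopted it.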
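Finally, setting the blockchain mumbling aside, the admission “transcript” and the employer query reduce to a record plus an integrity check. Here is one way it could work, using a plain hash commitment as a stand-in for the NFT; the field names and the scheme itself are my assumptions, and a hash commitment is far weaker than an actual zero-knowledge protocol (the candidate reveals the whole record to verify it).

```python
# Illustrative sketch only: a hash commitment standing in for the "NFT
# transcript". Field names and the scheme are assumptions, not a real
# zero-knowledge protocol.
import hashlib
import json

def make_transcript(model_version, scores, rank):
    record = {"model": model_version, "scores": scores, "rank": rank}
    payload = json.dumps(record, sort_keys=True)   # canonical serialization
    digest = hashlib.sha256(payload.encode()).hexdigest()
    # Clearinghouse publishes the digest at admission time;
    # the student keeps the record.
    return record, digest

def verify_transcript(record, digest):
    # An employer checks that a record the candidate volunteers matches
    # the digest the clearinghouse published when the student enrolled.
    payload = json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest() == digest

rec, d = make_transcript("2024-admissions-v1",
                         {"sat": 0.9, "essay": 0.7}, rank=212)
verify_transcript(rec, d)                 # genuine record: True
verify_transcript({**rec, "rank": 1}, d)  # tampered record: False
```

Every call to `verify_transcript` is also exactly the access event you would meter and price to test Justice Thomas's premise: if nobody pays to verify, nobody cares.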

What could possibly go wrong…?