A Little Primer on Why the Silicon Valley Bank Thing Matters

The following views are personal and based on my experience working in and around technology, venture capital, and the financial services industry. They do not represent the views of any employer or client of mine, past or present.

(2023-3-11 4:48pm ET Update: This two-part (part 1 and part 2) Twitter space with investor Bill Ackman, among others, may also be of interest; at the start of part 2 he also discusses an idea about creating a consortium of VCs to address all this.)

(2023-3-11 3:24pm ET Update: the “All In” podcast from this week goes much deeper into what’s below as well as other related issues.)

(2023-3-12 9:00pm ET Update: The Federal Reserve, US Treasury, and FDIC issued a statement, and concurrently the Fed announced a new program, the Bank Term Funding Program, to backstop SVB depositors. I have made some updates, in italics, to what I wrote below Saturday to reflect this latest development.)


Since we’ve all moved on from generative AI explainers to venture capital explainers, I thought I’d do a VERY simple explainer for my non-tech, non-VC, non-investment banking friends about why this Silicon Valley Bank (“SVB”) thing is a big deal.

1. Organizations with large endowments – think universities like Harvard and pension/retirement funds as well as family offices – usually invest most of their money in relatively safe investments. Most of them also take a small %, usually single digits, and invest it in higher risk investments.

2. A primary, though not only, vehicle through which they make their higher risk (and hopefully higher reward) investments is venture capital (VC). Specifically, they write a check for, say, $10M to a VC firm and become a Limited Partner (LP) in one of the funds constructed and offered by that firm (here’s a list of First Round Capital’s funds, for example; note nearly all these funds are open only to high net worth individuals and institutions). A fund essentially represents a group of a VC firm’s investments during a period of time (a specific year’s investments are sometimes called a “vintage”) as well as potentially a specific investment strategy/focus (e.g., a VC firm may offer up a fund that is just focused on, say, AI/ML or biotech).

3. The general partners (GPs) of the VC firm — think folks like Ben Horowitz at A16Z or Jason Calacanis — then make investments in entrepreneurs (startup founder/CEOs). So, for example, to make the math easy, Jason Calacanis might in his most recent (hypothetical) $100M fund have 10 LPs that invest $10M each ($100M total), and with that $100M he, as the GP, will write, say, 20 checks of $5M each ($100M total) to 20 different startups (again, I’m simplifying the math). The VCs get paid (usually) via the “2 and 20” structure also common in hedge funds and private equity (VC effectively being a flavor of PE, though not all VCs like being grouped in like that) — they charge a 2% management fee to those LPs and keep 20% of any investment return. That 20% may be realized a decade or more later (e.g., when 1 of those 20 companies has a multibillion dollar IPO after a decade of the company being built and growing, while the other 19 have failed and gone out of business — the VC business model is built on 1-2 big winners, a few other decent exits, and most companies going to 0), which is also how long the LPs may need to wait (I am glossing over a ton of stuff here about how/when GPs may choose to distribute gains along the way; tl;dr is that the ROI for LPs is nearly always measured in years. These are long term investments).
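
To put that simplified math in one place, here is a minimal sketch in Python. The fund size and check count come from the hypothetical example above; the exit proceeds figure is a made-up number purely for illustration.

```python
# A minimal sketch of the simplified "2 and 20" fund economics described
# above. All numbers are hypothetical.

fund_size = 100_000_000             # 10 LPs committing $10M each
check_size = fund_size / 20         # 20 checks of $5M to 20 startups

annual_mgmt_fee = 0.02 * fund_size  # the "2": $2M per year to the GP
exit_proceeds = 1_500_000_000       # say, one of the 20 IPOs big a decade later
gains = exit_proceeds - fund_size
carry = 0.20 * gains                # the "20": GP keeps 20% of the gains
to_lps = exit_proceeds - carry      # LPs receive the remainder

print(f"check size:      ${check_size:,.0f}")
print(f"annual mgmt fee: ${annual_mgmt_fee:,.0f}")
print(f"GP carry:        ${carry:,.0f}")
print(f"to LPs:          ${to_lps:,.0f}")
```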

4. When the entrepreneurs – the CEOs of the startups – get those $5M checks, they of course have to deposit them in a bank. As has been widely reported in the press, >50% of those funds ended up in Silicon Valley Bank, by far the most dominant bank in this space, which provided startups with the core banking services any small business would recognize (checking, loans, etc.). It is NOT just startups in California — SVB is (was) the dominant commercial (== checking, etc.) bank for technology startups generally.

5. Then there was the triggering event, the failure of SVB. Why did it fail? That’s where a deeper, longer discussion of banking is required. I’d refer you to explainers like this one from Marc Rubinstein. You also may find yourself googling “It’s a Wonderful Life”, “Liquidity Coverage Ratio”, “Held to Maturity”, and “borrow short, lend long”. It’s this last phrase that is key. SVB “borrowed short” by taking deposits. At its simplest, a bank makes money by taking deposits and lending (or investing) those deposits to get a return. It may pass along to depositors a portion of that return (i.e., the small interest rate you get on your checking or savings account — btw, a side effect of all this is that those rates may rise to encourage you to keep your deposits in banks rather than taking them and investing them in, say, t-bills yourself), but the portion the bank keeps of what it earns with depositors’ money is how it makes its money.

As we now know, a significant portion of SVB’s depositors’ money (i.e., the startups’ capital received through VCs, which is ultimately the capital of LPs) had been invested in longer term fixed income such as US Treasuries when, critically, interest rates were historically low. While these fixed income investments are considered very safe (compared with, say, the complex derivatives at the root of the 2008 banking crisis), as interest rates rose — which they did dramatically last year — they lost value in current (“mark-to-market”) terms if they needed to be liquidated on short notice. This Twitter thread from January goes into depth on what was going on.

Recall from Macro Econ 101 that when interest rates rise, the prices of previously issued bonds — i.e., how much those bonds could be sold to someone else for, their “mark-to-market” value — go down, because those bonds carry the now comparatively lower rates.

To provide a very simple example, suppose you bought a bond for $100 that pays you $2 a year (a 2% rate), and then the very next day there are new bonds for sale at $100 that pay $4 a year (4%). No one is likely to want to buy your 2% bond for $100 anymore. How much would you be able to sell it for? Setting aside the eventual return of your $100 principal to keep the math simple, about $50: now that there are 4% bonds in the market, a buyer paying $50 for your existing bond realizes that same 4% rate, given $2 is 4% of $50. (With the principal repayment included, a one-year bond would only fall a little; the longer a bond has left to run, the closer its price behaves to this simple example, which matters, because SVB held a lot of long-dated bonds.)
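
For those who want to check the arithmetic, here is a minimal sketch of standard discounted cash flow bond pricing (Python; the numbers are hypothetical). It shows why, once principal is included, a 1-year 2% bond barely moves when rates jump to 4%, while a 10-year bond falls substantially, which is the dynamic at the heart of SVB’s portfolio losses.

```python
# A minimal sketch of discounted cash flow (DCF) bond pricing.
# Price = sum of each future payment discounted at the prevailing
# market rate. Hypothetical numbers throughout.

def bond_price(face, coupon_rate, market_rate, years):
    """Present value of annual coupons plus principal repaid at maturity."""
    coupons = sum(
        face * coupon_rate / (1 + market_rate) ** t for t in range(1, years + 1)
    )
    principal = face / (1 + market_rate) ** years
    return coupons + principal

# A 2% coupon bond, repriced in a world where new bonds yield 4%:
print(round(bond_price(100, 0.02, 0.04, 1), 2))   # 1-year:  ~98.08 (small drop)
print(round(bond_price(100, 0.02, 0.04, 10), 2))  # 10-year: ~83.78 (big drop)
```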

So this is the key –> As interest rates rose, the current market value of SVB’s bond portfolio, which represented a large chunk of their depositors’ money, declined to the point that it was no longer clear the current value of those bonds could cover all of SVB’s deposits. Their “lend long” bond value was less than their “borrow short” deposits, which need to be returned to depositors (in the form of, say, an ATM withdrawal) at a moment’s notice. (It also appears that SVB may not have sufficiently hedged its bond investments, but that’s way beyond the scope of this piece.)

6. With SVB failed, these entrepreneurs and startups can’t (couldn’t) access some or all of that $5M they had deposited in SVB — money they had received through VCs’ GPs and which originally came from those LPs like universities and retirement plans. (If a CEO withdrew all of her company’s $5M before this week, she is OK of course. Ditto if she did not use SVB.) Key thing to keep in mind — that money in SVB is really, at the end of the day, university and pension fund (i.e., VC LP) money.

7. This is where the FDIC comes in. The FDIC backstops the first $250K. From there, what happens to the rest of what was deposited (if it was not withdrawn over the last couple days) is the open question that startups and their VCs are working through this very morning. This link from the FDIC is the primary source to read on what the FDIC is doing — read this FIRST. “Insured deposit” means a company’s deposits up to $250K. “Uninsured deposits” means everything above that, for which the FDIC on its site says “Uninsured depositors will receive a receivership certificate for the remaining amount of their uninsured funds”. (Note this was written before the announcement Sunday evening of the Bank Term Funding Program.)

8. What’s a “receivership certificate”? It’s basically a piece of paper that says “we need to figure this all out and when we do you may get some or all of that uninsured deposit.” How much, and when, is TBD.

9. So, for many startups, especially if they had all their money in SVB (i.e., all of that $5M check they got from a VC which originally came from the LPs), and had withdrawn nothing prior to the run, the only money they can count on is the $250K (“All depositors will have full access to their insured deposits no later than Monday morning, March 13, 2023”)

10. If over the weekend, someone acquires (had acquired) SVB, this situation with the deposits may end up remedied and the entrepreneurs will have access to all of their deposits (e.g., the full $5M in the hypothetical above). Of course, there are still the SVB employees, many of whom, depending on how an acquisition goes, may be out of a job. And depending on how the acquisition is done, those who own stock in SVB (technically its parent org, SVB Financial Group), which was traded on the NASDAQ, may lose some or all of their money. Nonetheless, there is general consensus that someone buying some or all of SVB over the weekend is the best case (again, I’m simplifying a lot here, and I’m not getting into creditor priority in liquidation, which I’ll let my former colleagues from Deloitte and KPMG opine on).

11. In the short term the issue facing startup CEOs is (was) having funds to make payroll and continue to run their businesses (and this, the payroll and jobs implications, was why there was so much pressure for action over the weekend). There are reports of companies that had planned layoffs and now can’t do those layoffs as they don’t have access to the funds to pay severance — a real chicken-and-egg problem. This puts these startups, and their employees, in an extremely precarious situation. The aforementioned receivership certificates may end up converting back to 100% of the original deposit, but in the short term a receivership certificate cannot be used to make payroll (unless someone is willing to accept these certificates as collateral, which would assume the lender has a high degree of confidence things will be worked out. Again, the best hope is SVB gets bought).

12. So now there are all these startups, not just in California but around the US and the world, that are stuck. Employees may not be able to get paid, which in turn means those individuals may not be able to pay their own mortgages, bills, etc. This might also accelerate the failure of these startups, leading to more tech layoffs, in turn impacting the wider economy. This also slows the innovation happening at these startups, of course.

13. And on the other side of this stream of capital, the LPs like universities are now rightfully concerned that their investments, which flowed through the VCs’ funds to startup CEOs/CFOs and were sitting as cash deposits in SVB, may have been lost, either wholly or in part (there are hedge funds bidding on those deposits this weekend to the tune of 60 to 80 cents on the dollar). Again, greatly simplifying –> if an LP invested $10M in a VC fund, and figuring half the entrepreneurs used SVB, up to $5M of their investment may be in question right now. That was a high risk investment to begin with, but this is likely not the type of bad ending for which they accounted – to quote investor Bill Ackman in a Saturday Twitter space, “People (LPs) want to take the risk they intended to take.”

14. Meanwhile, VCs (i.e., GPs and their teams) right now need to be focused almost exclusively on helping their portfolio companies figure this all out, and likely are not going to be able to dedicate much time or capital to new investments in new startups for the foreseeable future.

15. tl;dr: given the amount of capital and the multiple institutions and industries involved, this whole situation could impact the larger economy. These are the “second and third order effects”.

Retirement Plan (LP) -> Venture Capital Fund -> GPs of VC firm write Series A check to entrepreneur -> check gets deposited into SVB -> SVB invests those funds in bonds -> Depositors need those funds back to make payroll, but the value of the bonds has fallen such that there wasn’t enough to cover all the deposit withdrawals needed to pay startup employees.

I have greatly simplified A LOT here. Again, the target audience is not my friends in VC, tech, and investment banking but my family and friends back home in Maine.

I hope this was useful.

(Disclaimer again: I’m writing this as an individual and speak for no organization or company. This is also not investment advice and should not be construed as such. This is just for education purposes.)

Thoughts Ahead of the Q1 2023 FINOS Board Meeting

The following views are personal and based on my experience working in and around open source and the financial services industry the last five years. They do not represent the views of any employer or client of mine, past or present.

The Fintech Open Source Foundation (FINOS) quarterly board meeting is this week.

Below are six suggestions for the board, FINOS community, and wider community of developers, product managers, program leads, etc. working at the intersection of financial services and open source.

They are:

  • Identify Strategic Themes and Double-Down on a Critical Few Set of Projects
  • Build an Open Source Catalog to Demonstrate Commercial Value
  • Convene a Working Group around AI
  • Bring Product Managers to the Table
  • Build Business Cases, Benchmarks, and KPIs
  • Bolster Connection Between Open Source Programs and Software Security and Supply Chain Programs

More detail about each of these is below if you’d like to grab a cup of coffee and dig in …

Identify Strategic Themes and Double-Down on a Critical Few Set of Projects

FINOS is an industry (i.e., financial services) focused foundation. While there are open source projects that have grown out of financial services companies to have broad cross-industry applicability – pandas, developed by Wes McKinney when he was at AQR and which he continues to maintain today, and Eclipse Collections, originally created by Don Raab, now at BNY Mellon, being two such examples – a principal utility of FINOS is the ability to connect product teams working on common industry-specific use cases to collaborate through open source methods and code.

I recommend that the board deliberate to identify a targeted set of themes – 3 would be ideal, 5 at most – of shared strategic criticality, each representing a differentiated set of use cases within financial services. This would be a fundamentally more lightweight structure than the old programs model from the FINOS early days, or even the categorization on the landscape. Rather, this would be a statement of a few top priority areas, for which open source has utility, that resonate at the CIO/CTO level of FINOS members.

Ideally, in each of these areas there should be 1-2 FINOS projects already incubating or active, around which the board and wider community would further rally (more on that in a second). In the case of themes where FINOS does not have a current offering, rather than start something from scratch with an empty repo, which has had mixed success, I’d suggest looking for other projects in the Linux Foundation and wider open source community upon which to build financial services specific extensions and features.

Here are some candidate themes, and potential related flagship projects:

  • Big Data, Data Engineering, and AI/ML
    • Legend
  • Next-generation Financial Desktops
    • Perspective
    • FDC3
  • Trading Technologies, Front to Back, and (System-to-system) interop/exchange
    • Financial Objects and security reference data projects
    • ISDA CDM work
    • Morphir
    • MessageML, Symphony WDK, and similar projects
  • Payments
  • Blockchain and Tokenization
    While there is some applause for the relative lack of blockchain activity happening in FINOS, there is also lots of interesting blockchain work happening right now in the industry around areas such as fixed income issuance and settlement. CEOs of companies like BlackRock are talking publicly about tokenization’s utility. FINOS took a run at a Distributed Ledger Technology (DLT) program in 2018 and it didn’t quite get traction, but the core technology has developed a lot since then, and it now has more in-production use cases, including and especially in financial services.
  • Regulatory and Reporting
    How can open source and open standards help reduce the regulatory burden for financial services organizations, both in financial reporting and in reporting related to SBOMs, vulnerabilities, licenses, etc. for IT and engineering audits? Enterprises can and should think about how they streamline and standardize financial reporting and engineering/SBOM/tech stack risk reporting for regulators and auditors.

As to the aforementioned “rally”, this would take the form of:

  • The original contributors and/or current maintainers, with support from FINOS and the larger Linux Foundation as needed, re-commit resources to what are effectively product marketing roadmaps to further attract and retain consumers and contributors to these projects. Get the hygiene around these projects in great shape — documentation, public roadmaps, roadshow and conference presentation plans, support and mailing list response procedures, SDKs and reference implementations, etc.
  • In turn, and with an eye towards cross-pollination, each organization in FINOS should “pinky promise” to evaluate at least 3-4 projects in the FINOS catalog, with a goal that every member is consuming at least one project in FINOS that their own organization did not originally contribute. If this proves too hard – if each member can’t find at least one FINOS project that is useful enough for them to implement and use – then a more fundamental question about the FINOS project portfolio, or even the value of FINOS hosting projects on its own at all, should be asked, though I don’t think that’s where things are yet. As stated so well by friend of FINOS Jono Bacon Monday on LinkedIn, building a community around a software product (or catalog of projects) starts first with shared consumption.

The overall point is that we need to jumpstart the flywheel of cross-contribution via cross-consumption. This was a priority in FINOS’s early days but harder to do then with comparatively fewer members and projects. Now, under Gab and the rest of the team’s leadership, FINOS has a broader set of both members and projects from which to build cross-interest in each other’s projects.

Build an Open Source Catalog to Demonstrate Commercial Value

Financial services professionals working to build great products in banks and other financial services organizations, but with less background in open source, could use some help navigating which open source projects can be used for what, and how. Building on the success of the project expo at OSFF, I think there would be value in a library of case studies and examples of how open source projects, at least those beyond “household names” like Linux and K8S, have been put to use within financial services. Unlike the expo though, this catalog should not be limited to just FINOS or even Linux Foundation projects, but instead include any open source project that a financial services firm might use. Extra points if the case studies can include how consumers went on to become contributors.

Another way to implement this might not be as a catalog, which requires an initial critical mass of projects, but perhaps as a Yelp-style project review site where open source project consumers could easily share back their experiences and lessons learned when deploying a particular open source component, perhaps as an overlay to an initial set of project data from sources such as libraries.io.

Open source should be a way for financial services companies to accelerate product delivery roadmaps. A better way to share the pros and cons of open source packages, especially in a financial services context, would help product managers and engineers especially to make informed decisions, and in turn help organizations realize more commercial value from open source.

Convene a Working Group around AI

Everyone has heard about ChatGPT, and if your social feeds are like mine, they are filled with posts about AI.

I think the board should consider starting an AI/ML working group to take on the following topics:

  • Financial services specific considerations (e.g., IP considerations) when using AI-powered developer tools like GitHub Copilot and ChatGPT-infused repl.it.
  • Open source licensing and its applicability (or lack thereof) to AI/ML. To highlight just one specific issue among several: what counts as “modification” of an ML model? Is setting the weights in a neural network enough to trigger the modification criteria if an ML model’s maintainers have adopted AGPL as its license (as some have done on Hugging Face)? Using open source licenses for ML models may be a square peg in a round hole.
  • License and copyright issues related to the underlying code bases and data sets on which AI/ML models are trained. Tools like The Stack, built by BigCode on Hugging Face, which allow one to search a code corpus for their own contributions, are the kind of tooling and transparency of which we need more.
  • How tools like ChatGPT can be used to quickly build initial scaffolds and working prototypes of new projects for financial services, and what additional data sets could be used in combination.

Bring Product Managers to the Table

I am inclined to wonder if participation – i.e., having “a seat at the table” in the distributed product management at the core of open source, which happens in GitHub issues and on working group calls – may be at least as valuable, if not more so, to financial services companies than code contribution by engineers.

I think the community can and should do more to encourage participation by product managers in FINOS and similar efforts, as it is the product managers who likely have the clearest and most comprehensive view of end-user use cases. Product managers are well positioned to provide input on requirements, roadmaps, and features. Along these lines, there are also opportunities to help bridge the gap between internal product management tools and processes and their corollary systems in open source communities (e.g., GitHub Issues coupled with a project’s governance model).

Build Business Cases, Benchmarks, and KPIs

Can more be done by open source practitioners, and the open source community overall, to shore up the business case for open source? (By “open source” I mean everything above and beyond passive open source consumption — i.e., “active” consumption by leveraging tooling like the OpenSSF criticality score, referenced below; participation by product managers and engineers in working groups; code contribution; and financial sponsorship in the form of foundation memberships and project grants.)

Unless corporate leadership can be shown EBITDA (or FCF) measurable IRR and ROI models to rationalize investments in open source (as defined above), I think open source may increasingly find itself buffeted by the waves of economic cycles, especially as technologies like AI/ML (which is not mutually exclusive with open source, but may end up treated as a distinct set of programs for corporate planning purposes) become the new hotness attracting attention, sponsorship, and budget dollars.

And so, given there is only so much budget to go around even in the most well capitalized of corporations, organizations pursuing open source strategies could use help with:

  • Business model templates, preferably as IRR models
  • Categorized lists of concrete, tangible business benefits, preferably those that other companies publicly acknowledge having realized. (These cannot be theoretical or aspirational).
  • Guidance on how to set targets and key results goals.

Along the lines of benchmarks, and as I’ve shared in TODO Group and FINOS Open Source Readiness meetings previously, the industry and open source community could benefit from a set of common KPIs, specifically a scorecard with shared definitions with which an organization can benchmark itself against 1) its sector (e.g., capital markets) and competitors/peers, 2) the financial services industry overall (retail banking, capital markets, asset managers, data providers, etc.), and 3) open source overall (for which the Open Source Contributor Index is effectively one version). I suggested a few such KPIs in a GitHub issue several months ago. Compiling these benchmarks might be done as part of the Open Source Maturity Model (benchmarks and measures being useful context for maturity rubrics) and/or the State of Open Source in Financial Services annual report. I’m hopeful the CHAOSS community could be a huge help here too.

Here’s a starting point of a few KPIs (some of which, at least in part, exist on LFX Insights) that I think could be the basis for useful benchmarks. Trend lines – month over month, quarter over quarter – coupled with the ability to compare one’s own organization with its sector, industry, and the open source ecosystem overall, are what would make these even more useful to executives.

  • Active Contributor Ratio
    Active Contributors / Total Engineers (or potential contributors overall to include PMs, etc) in a given time period

    The first step to using this metric is a shared definition. One definition is the number of individuals who have proposed a pull request (which is a bundle of 1 to n many commits) in a given period. The Open Source Contributor Index (OSCI) by EPAM, by contrast, uses >10 discrete commits in a given time period.

    (Fictional) Example: A global asset manager has 15,000 engineers and 5,000 PMs, so a total addressable “market” of contributors of 20,000. 1,000 are active contributors to open source. Its active contribution rate is 5%. (See the sketch after this list.)

  • Count of contributions in a time period made by a given organization, sector, industry
    # of contributions

    Similarly, the first step here is to define what counts as a contribution. Is it just code contributions? Or other valuable forms of contribution, like raising a GitHub issue?

    Useful drill downs might include:
    • Products/projects to which contributions are being made by company, sector, industry
    • Foundations that host the projects to which contributions are being made
    • Technology or domain areas of contributions (E.g., data management, blockchain)
    • Use cases and business domains

  • Count of projects to which an organization, the sector, and the industry are actively contributing
    # of projects

    Potential drill downs:
    • % of projects to which contribution is being made that are in the Linux Foundation, FINOS, etc. Perhaps a pie chart of foundations that house the projects to which contribution is being made?
    • Ratio of the number of projects to which an organization, sector, or industry is actively contributing and that it also uses in production, over the total number of open source projects it uses. This ratio would be useful to show how well aligned contribution is to overall open source consumption. It could be further tweaked to include in the denominator just, say, the top 500 most used projects, which then helps show alignment of contribution to the projects most used.
      (Total projects to which contribution is made – Projects to which contribution is made but that are not presently consumed in production) / Total open source projects/packages used in production
  • % of pull requests to a FINOS (or any open source) project during a time period
    • that are made by the original contributing organization (e.g., % of pull requests made to Waltz by DB)
    • that are made by other FINOS members other than the original contributing organization (e.g., % of pull requests made to Waltz by any FINOS member other than DB)
    • that are made by other LF members other than the original contributing organizations
    • that are made by contributors from non-financial services organizations
  • Watcher, Star, and Fork counts across …
    • Projects contributed (created and originated) …
      • by an org (e.g., Perspective, Quorum etc. for JPMC; GitProxy, Datahub, etc. for Citi, etc.)
      • by a sector (e.g., asset management)
      • by the industry
    • Projects to which an organization, sector, industry contributes
    • Projects an organization, sector, and industry consumes
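
As a concrete illustration of the Active Contributor Ratio and contribution counts above, here is a minimal sketch in Python. The event shape, field names, and the PR-based definition of “active” are illustrative assumptions, not a settled standard:

```python
# A minimal, hypothetical sketch of the Active Contributor Ratio and
# contribution-count KPIs described above. The data shape and the
# PR-based definition of "active" are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Contribution:
    author: str    # individual contributor
    org: str       # contributor's employer
    project: str   # open source project receiving the contribution
    kind: str      # e.g., "pull_request" or "issue"

def active_contributor_ratio(events, org, potential_contributors):
    """Share of an org's potential contributors with >=1 PR in the period."""
    active = {e.author for e in events if e.org == org and e.kind == "pull_request"}
    return len(active) / potential_contributors

def contribution_count(events, org):
    """Total contributions (all kinds) made by an org in the period."""
    return sum(1 for e in events if e.org == org)

# Toy data for a hypothetical member firm:
events = [
    Contribution("alice", "ExampleBank", "perspective", "pull_request"),
    Contribution("bob", "ExampleBank", "legend", "issue"),
    Contribution("carol", "OtherCo", "fdc3", "pull_request"),
]
# e.g., 15,000 engineers + 5,000 PMs = 20,000 potential contributors
print(f"{active_contributor_ratio(events, 'ExampleBank', 20_000):.4%}")
print(contribution_count(events, "ExampleBank"))  # 2
```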

While I doubt individual corporate performance will ever be public for, say, Morgan Stanley, to do a direct comp of any of these contribution metrics with, say, JPMC, I think it’s reasonable to expect executives who fund open source programs and associated foundation memberships to ask, and be able to get coherent answers to, questions which require industry context like:

  • Are we contributing at a higher or lower rate than the sector, industry, and overall cross-industry average?
  • How does our contribution activity overlap to the projects we most consume? How about for the sector and industry? (see discussion of interventions below)
  • How does our consumption and contribution map to the foundations to which we provide financial support?
  • How much contribution (traction) are we getting to projects we contributed from other FINOS members and wider industry participants? How about from top clients?
  • Where are other industry participants, especially our clients, focusing their contribution activity?
  • and an overarching project discovery question: What open source projects are not on my radar that we should be looking at?

Just having a canonical top 10 or top 100 list of the open source packages most consumed by the financial services industry could be useful.

Bolster Connection Between Open Source Programs and Software Security and Supply Chain Programs

The criticality of open source package vulnerability detection and mitigation continues to grow. Hence the 2021 White House Executive Order and the creation of OpenSSF.

As was suggested on Twitter Monday, there can and should be more connective tissue between OpenSSF and the TODO Group, the latter being an incredible consortium of OSPOs in the Linux Foundation, led by Ana Jimenez, through which leading practices are shared about and among open source programs. I’d add FINOS to the mix. Why? Because open source programs, including and especially in financial services, should be well connected to, and supportive of, software supply chain initiatives usually driven out of some combination of the CISO org, DevX, and CI/CD type groups. Additionally, financial services firms have industry and regulatory specific requirements related to handling software security and performing incident disclosure.

The most concrete tie-in between open source consumption and usage security (“inbound”) and open source programs, which are often focused on contribution (“outbound”), is project and community health. Project health checks – which can include metrics such as PR review cycle time – are a useful early warning light that an open source project may have a significantly greater chance of containing heretofore unidentified critical (CVSS 9.0+) vulnerabilities. Recognizing their value in risk identification, project and community health metrics are being incorporated into the OpenSSF Criticality Score.
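
To make one such health check concrete, here is a minimal sketch of computing median PR review cycle time. The input shape is an assumption for illustration; in practice this data might come from the GitHub API or tooling such as CHAOSS/GrimoireLab or LFX Insights:

```python
# A minimal, hypothetical sketch of one project health "early warning"
# metric mentioned above: median PR review cycle time. The input shape
# is assumed for illustration.
from datetime import datetime
from statistics import median

def median_review_cycle(prs):
    """Median time from a PR being opened to its first review."""
    cycles = [
        pr["first_review_at"] - pr["opened_at"]
        for pr in prs
        if pr["first_review_at"] is not None
    ]
    return median(cycles)

prs = [
    {"opened_at": datetime(2023, 1, 2), "first_review_at": datetime(2023, 1, 4)},
    {"opened_at": datetime(2023, 1, 10), "first_review_at": datetime(2023, 1, 25)},
    {"opened_at": datetime(2023, 1, 12), "first_review_at": None},  # never reviewed
]
print(median_review_cycle(prs))  # 8 days, 12:00:00; compare to a risk threshold
```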

In addition to helping software security teams in banks to implement the OpenSSF criticality score among other forms of project health check reporting, open source program professionals are also well positioned to advise on the potential interventions a company might take when a particular project, especially one that’s commonly used or prevalent across transitive dependencies, starts to exceed specified risk thresholds. These interventions might include the following non-mutually exclusive actions:

  • New or increased financial support for an at-risk open source project through
    • Support of the underlying foundation if the project is part of one (especially via directed giving if that is an option)
    • Maintainer grants
  • Increased code contribution by the firm’s own teams
  • Hiring independent developers, perhaps via programs such as Major League Hacking, to build new features and fix bugs in an identified at-risk project
  • Increased product direction and feedback by the firm’s own product managers and security professionals
  • Especially if a project is no longer actively maintained, or its maintainers are no longer responsive to PRs, hard fork the project
  • Evaluate alternatives, both open source and proprietary, that might take the place of the at-risk project.

Evaluating the suitability and feasibility of these interventions is work open source programs should be well positioned to help CISO teams with, and an excellent way, in my view, for these programs to demonstrate further value.

Finally, providing all this information in an easily consumable way, with useful visualizations, to the CISO, CTO, and CIO levels of an organization, as well as any number of the business case metrics above, is itself a big area of improvement opportunity. For example, CISOs and CTOs should be able to readily call up a list of, say, the top 50 most used open source projects in their consolidated SBOM, with an overlay of 1) OpenSSF health score and 2) current enterprise engagement with and support of these projects (i.e., the interventions above). Better still, imagine bank leadership being able to call up such a dashboard in their preferred open source powered financial desktop of choice (perhaps with FDC3 integration with complementary tools) such that open source security reporting is a “first class citizen” among other executive level risk reporting, and a complement to the existing FINOS-wide public visualizations available in LFX Insights.

Side note: Over the holiday, I started playing with an open source project in the Apache Software Foundation, DevLake, contributed by Merico, an open core company whose investors include OSS Capital, the VC firm that invests exclusively in open source companies; I think the DevLake project could provide some of the metrics and visualization building blocks. There’s also some great new stuff in the latest LFX Insights release. Lots to work from.

In conclusion …

I am really excited about all the great stuff happening in and around FINOS, its incredible members, and the intersection of open source with financial services. I am still buzzing from the fantastic OSFF last month — some of us “old timers” (looking at you, Brad!) were remarking at how much the community has grown. Great stuff! Here’s to an awesome 2023!!

This post was not written by ChatGPT.

TTM Dinners

Later tonight, at a small restaurant in Park Slope, Brooklyn, I am hosting a dinner of ten. Some are old personal friends, others are former colleagues, and a few are people I’ve recently met. All are accomplished in their respective fields. Tonight’s gathering is the fourth such dinner I’ve put together since the first in nearby Carroll Gardens last January.

The genesis of the dinner idea came from my belief that interesting things happen when you bring together a cross-section of people of different backgrounds and industries, from organizations large and small — and that being at that intersection is a great place to be. As I’ve put together the previous dinners I’ve been asked “Is there a theme?”, “Is there an industry focus?”, “Is this a dinner where investors can meet founders?” The answer to all of these questions is “no.” It’s just dinner and conversation.

Now nicknamed “TTM Dinners”, those attending tonight represent a cross-section of industry sectors including edtech, investment banking, fintech, film, consumer packaged goods, venture capital, and technology. At previous dinners, we’ve had leaders and founders from other industries such as real estate, adtech, academia, public sector, and philanthropy.

While my intent is to keep diverse the industry sectors of the attendees, the dinners have been weighted somewhat towards media, education, financial services, and the technology startup community (i.e., founders and VCs). That’s probably a reflection of both the industries in which I make a living as well as the commercial dynamics of New York City. What does concern me a bit is that the dinners have been sometimes too male, too white, too straight, too economically privileged, and perhaps too comfortable in a narrow center-left lane of political opinion. I have work to do to make the dinners authentically inclusive of different perspectives and backgrounds while also avoiding the appearance (or reality) of tokenism.

What is central to the dinners, and I think non-negotiable, is Brooklyn. I love my adopted borough of Brooklyn as much as I do my home state of Maine. Going back to my years at Deloitte and the work I led there with Silicon Valley Bank, NASDAQ, and Cooley to create the Digital Media Center, I have been excited about the idea of encouraging the business community to embrace Brooklyn as someplace where great ideas are incubated in a one-of-a-kind setting. I remember the excitement (and relief) I felt in March 2011 when I saw Deloitte colleagues, and then TMT industry leaders, Phil Asmundson and Craig Wigginton arrive at the first Deloitte event I organized in Brooklyn with Digital Dumbo. It’s incredible how the perception of Brooklyn as not just a place to do business, but also as a place where innovation happens and new companies emerge, has changed in seven years. There is perhaps no better place for a dinner like this.

Over the course of the three dinners to date a core of regulars has emerged, including the technologist and philanthropist Paul Walker, filmmaker Steve Shainberg, and media entrepreneur Gregg Schoenberg, who help me organize the dinners and, especially, curate the guest lists. Each has been incredibly helpful in providing me candid feedback and concrete ideas.

I am excited about tonight’s dinner and the amazing people with whom I’ll break bread in a few hours. The next dinner will likely be in April or May. If you’d like to join, please let me know by sending me an email at rob(at)ttmadvisors.com.

Thoughts for the CSforAll Summit in St. Louis

The CSForAll Consortium Summit in St. Louis starts Monday. CodeBrooklyn, co-founded by Brooklyn Borough President Eric L. Adams and me in 2015, is a consortium member. The mission of CodeBrooklyn is to raise awareness of computer science education among Brooklyn parents and schools and in turn connect school communities, especially those that are majority students of color or Title I, with funding for CS programs. Due to various client commitments I am unable to get to St. Louis. So here are a few questions and thoughts I have regarding the CSForAll movement in the United States, with a heavy dose of NYC in particular, going into what looks to be a tremendous coming together of incredible educators, donors, advocates, and organizations. These topics and questions are organized as follows:

  • Should the “CS Ed Movement” Work with the Trump (and DeVos) Administration? And Will Different Choices Divide It?
  • “Rigor for All”: We Must Avoid Watering Down (and Creating Two Tiers of) Curriculum and Lessons, Especially Based on Socio-Economics
  • We Can’t Forget the Rural Areas of the Country
  • Our Choices of Languages and Paradigms Need Review


Should the “CS Ed Movement” Work with the Trump (and DeVos) Administration? And Will Different Choices Divide It?

It’s evident to many who work in CS education in the U.S. that some of the leadership of the national computer science education movement has divided into two camps. This division is largely, though not entirely, based on willingness to work with the current president, his education secretary, Betsy DeVos, and the administration overall.

The first “camp” is led by Code.org, which has made the calculation that despite concerns they might have about other Trump administration policies (including and notably the “Muslim Ban”; Code.org’s founders, twins Hadi and Ali Partovi, are immigrants from Iran) there is more to be gained by moving forward and working with the current administration.

The result of this collaboration was the September 25 White House announcement of the Trump administration’s intent to “establish a goal of devoting at least $200 million per year in grant funds towards … high-quality STEM and Computer Science education”. This was followed a day later in Detroit by First Daughter and Advisor to the President Ivanka Trump’s announced commitment of $300 million in new private sector funding for computer science education by Internet Association members that have also chosen to be partners with the Trump administration on computer science education. These White House partners include Amazon, Google, Microsoft, General Motors, and Salesforce.com. (How much of both the public and private funding is actually “net new” vs. money previously pledged and/or being shifted from other important federal purposes is a separate, though crucial, question to be asked and is as of now unanswered.)

The other camp is essentially the CSForAll Consortium — or, more correctly, several of its most visible and well-known leaders and supporters. Several of these people were previously members of the Obama White House Office of Science and Technology Policy, which led that administration’s development of CS policy. In particular, they were behind President Obama’s January 2016 CSforAll proposal, the centerpiece of which was to be $4 billion in federal funding (Disclaimer: In December 2015 I attended one of a series of White House CS For All meetings; I was also consulted by the Hillary Clinton campaign regarding the development of her CS education platform). The position of many in this group is perhaps best represented in Reshma Saujani’s powerful op-ed in the NY Times last month about her decision that she and the organization she started, Girls Who Code, would not participate in the new Trump White House CS education announcement.

As context, I supported Hillary Clinton in the primaries and voted for her in the general election. I am a registered Democrat; for a couple years I have represented the blocks around my house in Park Slope, Brooklyn – the 64th Electoral District – on the Kings County Committee.

But I also grew up in Maine – Kennebunkport specifically. Maine is a notoriously independent state and being from Kennebunkport I couldn’t avoid meeting a Republican or two. I have a friend who worked on the transition team and now has a senior role in the Trump administration with the Department of Homeland Security.

My first inclination immediately after the election was to give the President-Elect and his team time to develop and announce their CS education policies (unlike Hillary Clinton, President Trump’s campaign did not have a CS education policy plank) – but to also retain the option to change my mind if circumstances warranted.

In November I argued for this view – “let’s give them a chance” – on a Facebook forum for those involved in the CS movement. Many argued against me, taking a position similar to Reshma’s view expressed last month in the Times. They viewed any work with the Trump administration as inherently immoral. Undaunted, I created a Change.org petition asking the incoming administration to support the CSForAll program. I shared my petition by email on December 1, 2016 with a number of the people I know who are active in the CS education movement, several of whom will be in St. Louis. Organizations I respect and admire like Code.org and Microsoft have made the decision that partnering with President Trump, his daughter, and Secretary DeVos in order to further fund and expand computer science education is worth the risk.

My larger concern now is what this means for the CS education movement, both now and down the road. While much of this is political inside baseball, probably irrelevant and invisible to the teacher in her/his CS classroom (and, at the end of the day, that’s really all that matters), I worry that the movement will further fracture based on decisions to work – or not – with Donald Trump.

“Rigor for All”: We Must Avoid Watering Down (and Creating Two Tiers of) Curriculum and Lessons, Especially Based on Socio-Economics

Last month I attended the first joint CSNYC/CSTA-NYC Meetup – the previous CSNYC Meetups have been reconstituted as the NYC chapter of CSTA. Through my employer, TTM Advisors, I’ve done work with both CSNYC and CSTA and admire both organizations. The Meetup was also a celebration of Cornell Tech’s new Roosevelt Island campus. Diane Levitt, Director of K-12 Education at Cornell Tech, and Michael Preston, Executive Director of CSNYC, were co-hosts; these two people have worked tirelessly to expand computer science in New York City for many years.

While the event was good, and the company even better, I was disappointed by an exemplar video the NYC Department of Education CS4All team chose to highlight as an example of the computer science lessons both NYS/NYC taxpayers and the funders of CSNYC are paying to have developed. Mike Zamansky, one of the most esteemed computer science teachers in the city, and who until last year led the CS department at Stuyvesant, was similarly disappointed in the video and its showcased lesson.

The most celebrated attribute of the lesson featured in the video appears to be that it was “culturally responsive.” But I am not sure just how “culturally responsive” what’s shown in that video really is. I am not a big fan of the whole cultural appropriation critique popular now, especially on college campuses, but if I could ever sympathize with that view a bit, it’s in this video.

Throughout the world drumming has a tremendous spiritual tradition. Drumming is also an important way of communicating stories and traditions across generations. I am not sure students rote memorizing and reciting HTML does that tradition the respect it deserves. Moreover, the children in the video looked, like many NYC schools (though, sadly, hardly all – more on that in a bit), pretty diverse. To which children’s culture in that video was the lesson seeking to respond? I couldn’t help wonder what the reaction might be to a video of Korean- and Chinese-American students in Flushing reciting the periodic table to Japanese Taiko drums in order to construct a “culturally responsive” chemistry class.

My larger concern is the content of the lesson itself. Simply put, from what I’ve seen, memorization and recitation of an HTML page, even if done to the beat of a drummer, is just not computer science education. Is there any proof, anywhere, that having students rote learn the HTML contents of a web page has any pedagogical value?

I am guessing someone decided to take this approach because 1) they wanted to do something “culturally responsive”, 2) they wanted to introduce kinesthetic learning (not uncontroversial; sometimes connected to efforts to be “culturally responsive” which is one source of the controversy), and/or 3) drum class was the only class during which the principal could find a way to do computer science.

But in the video the drum teacher is clear in his own admission that he doesn’t understand the semantic meaning of the words (the HTML code) he’s teaching the kids to say. This is remarkably different from the traditional cultural role of drumming and chanting in the first place — the words matter tremendously, whether it’s invoking a God or transmitting a traditional story from generation to generation. The person leading and drumming would definitely know the meaning of the words and likely have a hand in their interpretation.

And is there even an opportunity for the kids to learn the semantic meaning of, for example, “a href”? We never find out. It feels like going to Latin Mass and reciting the Gospel but not having any idea what you are saying because you don’t read Latin.

Moreover this video is now online as a featured exemplar for other school communities around the city. Do we want other schools, principals, parents, teachers still getting their head around what “CS4All” even means to see this video and think “Oh, so that’s what computer science education is?” I’d argue “No.” I cringe thinking that school communities I work with in Brooklyn and around the city might see that lesson and conclude that’s what computer science education is.

Another concern I had at the Meetup was that the DOE CS4All team member made the statement that “We wanted to make sure teachers were not grading students based on their code and whether it was correct or not… based on wanting them to grade students on their process and how they got there.”

First of all, this is a terrible message to send to kids. It’s the epitome of “everyone gets a trophy”. It puts effort above results. The world is a brutal place and just doesn’t work that way.

Perhaps in the affluent, largely white, ostensibly “progressive” neighborhoods like the Upper West Side and (sadly) my own, Park Slope, many children, through the trust funds and inherited wealth of their parents, can be safely insulated from the real world of accountability, testing, and objectively right and wrong answers. But outside of these neighborhoods and a particular cohort of parents (the same general group of parents who in New York City are fond of attending public forums to clamor for integrated schools but whose own, quite different, personal choices to self-segregate their own kids contribute to making NYS and NYC home to the most segregated schools in the country; I saw this issue first hand when I was an elected member of CEC 13 and had to vote on the contentious PS 8/307 rezoning) the rest of us study, work, and live in a world where we are measured not by effort but by results. And I’m guessing that more than a few of the children in the exemplar drumming video are not the beneficiaries of the socio-economic privilege to “opt out” of accountability, testing, or jobs with performance reviews based on objective criteria (results).

Moreover, it must be noted that correctness is a very real thing in computer science. The correctness of code is a critical concept within computer science theory. Programs can be proved correct using formal verification techniques. Code can be evaluated by things like its readability by other developers, its extensibility, and whether it makes appropriate use of abstraction.

Software can be further evaluated on its ability to fully and wholly solve the problems it was designed to solve while creating no secondary effects, including and especially negative ones — i.e., adherence to business requirements and (successful) acceptance testing.

Code can also be evaluated – objectively – on the time complexity of its component algorithms (Big O notation).

And while it’s true that a program might be written in, say, 100 different ways, so long as the requirements are met, functions produce expected return values in unit tests, etc., the DOE team member’s assertion didn’t even reach that bare minimum. Getting the right answer apparently matters not at all.
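
To make that concrete, here is a minimal (hypothetical) sketch of the point: two very different implementations of the same function are both objectively correct because they pass the same externally checkable tests, and they can still be compared objectively on time complexity (Big O):

```python
# A minimal sketch: correctness is externally checkable, regardless of
# how differently two programs are written.

def sum_to_n_loop(n):
    """Sum 1..n imperatively: O(n) time."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_to_n_formula(n):
    """Sum 1..n with Gauss's closed-form n(n+1)/2: O(1) time."""
    return n * (n + 1) // 2

# The styles differ; the right answers do not.
for n in (0, 1, 10, 100):
    assert sum_to_n_loop(n) == sum_to_n_formula(n)
print("all tests pass")
```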

More broadly, I believe this pedagogical approach advocated by some in the education community – process over answers, the “we just need to teach kids how to think” approach that devalues the role of objective fact in schooling – has contributed to the rise of our corrosive “fake news” political climate and to portions of our citizenry seemingly challenged by the need to differentiate fact from fable.

The assertion that “We wanted to make sure teachers were not grading students based on their code and whether it was correct or not” is a perversion of the original and legitimate concern about the time spent on testing and rote learning. That twisting has taken us to a new place where people stand up to assert we do not want to evaluate students’ work for objectively right or wrong answers. Right answers do exist in computer science, and it is a misrepresentation to set the expectation with students that computer science – or the real world – is just “Free to Be You and Me“. It’s not.

Beyond the concerns I had at last month’s Meetup, I have another concern: paper based computing activities and their use in economically disadvantaged school communities.

As context, many key concepts of computer science can be taught offline – i.e., on paper. For example, the summer professional development for TEALS volunteer teachers shows how one can demonstrate algorithms like insertion sort and merge sort using student participants and pen and paper. Moreover, nearly every teacher I know has a backup plan or two for any online lesson in the event of network or laptop problems (which happens a lot).
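
For readers who have not seen such an “unplugged” lesson, here is a minimal sketch (in Python, purely for illustration) of the insertion sort those exercises act out, with each value stepping left until it finds its place in the already sorted prefix:

```python
# A minimal sketch of insertion sort, the algorithm "unplugged" lessons
# often act out with students standing in a line.

def insertion_sort(values):
    result = list(values)
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        while j >= 0 and result[j] > current:
            result[j + 1] = result[j]  # shift larger values to the right
            j -= 1
        result[j + 1] = current  # drop the value into its open slot
    return result

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```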

But increasingly, especially in NYC, I’m seeing what is a “hands in the air” approach to the infrastructure issues that schools may encounter with laptops and networks, which are major obstacles to online computer science instruction. The effective message is this – if your kids’ school (and the parents at that school) are not affluent enough to buy a bunch of laptops that actually work and/or politically connected enough to lobby your city council person for “Reso A” funding to get infrastructure upgrades, your school community is expected to get by with paper based offline lessons for computer science. This is two tier education and in conflict with “For All.”

I think it’s unfortunate that the interdependence between the success of the ten-year DOE CS4All rollout in NYC and ongoing upgrades to school LANs and the DOE WAN does not formally exist, though it is obvious to everyone involved nonetheless. I know the CS4All NYC DOE team – who work their butts off for our city (let me be clear: my constructive criticism comes from a place of love, not disdain) – has no control over the NYC DIIT schedule of projects. But that doesn’t make disconnecting these work efforts within the DOE the right approach.

I know fairly well the laptop and network situation on the ground in NYC schools. Last year I contributed, through my work on Code Brooklyn, to the Technology Survey Report developed by the Brooklyn Borough President’s Office, which was released during last year’s CS Ed Week. Right now, through my work with TTM Advisors, I’m doing a project for the Heckscher Foundation for Children where I’m visiting economically disadvantaged school communities (i.e., Title I) around the city, especially in the Bronx and Brooklyn, to assess the root causes of poor network performance, both LAN and WAN.

The headline is that public schools of privilege – generally schools with majority white populations in Manhattan, “Brownstone Brooklyn”, and pockets of Queens, the school communities that can raise a million dollar PTA fund and pull political strings as needed to get still more funding – have the best hardware and networks in the city. The rich, as is all too common in our nation, especially in areas that self identify as “liberal” and “progressive”, get richer.

So while affluent white children learn to code on shiny new laptops in Tribeca on fast Meraki networks, children of color in Brownsville are expected to get by learning computer science on paper, if they have any computer science at all. That isn’t right. The answer to crappy laptops and slow networks can’t be to continue the “let them eat cake” approach of telling them to do computer science offline on paper. The answer must be to get those schools new laptops and fast, reliable networks. To be clear, I know first hand that the DIIT is working really hard to address this; New York State’s slowness in dispersing Smart Schools Bond Act money hasn’t helped matters either. But, again, effort is not the same as results, and we must make sure ALL of our schools have the laptops and networks they need in order to teach computer science.

One final note on the topic of have and have not school communities: if you’re involved in the computer science movement, regardless of your role or where you live, and you’re not getting to a school that is either Title I and/or majority students of color – ideally once a month, and at least once a quarter – you risk falling out of touch with the school communities that deserve the most attention and focus. These are the school communities most critical to our nation realizing the “for all” vision for computer science. It’s essential to spend time in schools, talking to students, parents, teachers, and administrators (though, if you do visit a school, please do not do as one global social media company did and spend the majority of your time at a school located in the Howard Houses public housing in Brooklyn, at which many students’ families have daily concerns about basic food security, talking about how great the free corporate cafeterias are in Palo Alto. If you’re that well-off, you might consider donating a couple dollars to that school so it can buy a few new computers instead of bragging about your burrito and sushi bars).

We Can’t Forget the Rural Areas of the Country

With the exception of my work from 2015 to 2016 as the founding Executive Director of TeachCS, which was national in scope, most of my work within the CS movement has been limited to New York State, New York City, and the great borough of Brooklyn. But love Brooklyn though I do, my heart will always be in my home state of Maine. And so when I was back home in July I was happy to read in an op-ed in the state’s largest paper, the Maine Sunday Telegram/Portland Press Herald, that the state legislature had passed L.D. 398, an act “to enact measures designed to ensure that schools are authorized and encouraged to consider courses taken by a student in computer science as demonstrating proficiency in science or mathematics.” The centerpiece of this legislation is the establishment of a K-12 Computer Science Task Force, chaired by Jason Judd, Project Director of Project>Login at Educate Maine, and of which Code.org is also a member.

I think we all now know all too well what can happen when affluent, college-educated professionals in coastal cities lose sight of the real economic pain being experienced in much of rural America, especially among those whose jobs in manufacturing and other heavy industries have been lost. And given the nature of the CS movement – many, though not all, of its leaders, philanthropists, donors, and organizations are based in large urban centers on the East and West Coasts – it can be easy to forget and neglect the needs of rural America. We can’t let that happen.

Of course, it must be noted that a relatively rural state not entirely dissimilar to Maine in size, poverty level, and economic headwinds – Arkansas – is arguably the national leader in statewide computer science education programs. This is in large part due to the tremendous work of Anthony Owen, Arkansas’s State Director of Computer Science. Arkansas has shown us that rural states in the middle of the country don’t just have to keep up – they can and will lead.

Some will argue that the answer is online learning. Some of the TEALS program delivery to rural states is done in just that model, using remote TEALS volunteers and telepresence. And while as the former CTO of Relay Graduate School of Education I have certainly seen the potential and power of flipped classrooms, hybrid models, and online learning more generally, what’s needed most – whether it’s the South Side of Chicago or Aroostook County in Maine – is qualified, in-person teachers in all schools, urban and rural.

It’s great that the CSForAll Consortium chose St. Louis, the Gateway to the West and the geographic center of the country, as its host city. A critical part of “For All” is our more rural states and territories (let’s remember Puerto Rico and the USVI now more than ever!), and this choice of location sends an important message about the CSForAll Consortium’s commitment to those areas. (And if you’re interested in this topic, check out Garth Flint’s blog.)

Our Choices of Languages and Paradigms Need Review

Imperative and object-oriented (OO) programming, and languages in those paradigms like Java and Python, still make up the lion’s share of computer science curricula. What gets shortchanged in these choices is functional programming (FP), and that’s unfortunate for a few reasons. First, the crosswalks between math and computer science are arguably at their strongest and clearest in functional programming. Second, by not seeing functional programming until years into their CS education, students get so used to imperative and/or OO styles that FP seems like a huge leap, and it really shouldn’t (it’s worth noting that universities like Harvard teach functional programming in the second semester of their computer science program). Third, while FP has been around a long time (Alonzo Church, who developed the lambda calculus on which functional programming is based, was Alan Turing’s doctoral advisor), it’s now seeing a huge resurgence in interest and usage, both academically and commercially.
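
To make that math-to-CS crosswalk concrete, here’s a tiny illustration of my own (in OCaml, a language I’ll come back to later in this post): the definition of factorial is a near-verbatim transcription of its mathematical recurrence, 0! = 1 and n! = n × (n−1)!.

(* The code mirrors the mathematical recurrence almost symbol for symbol. *)
let rec factorial n =
  if n = 0 then 1
  else n * factorial (n - 1)

let () = assert (factorial 5 = 120)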

We also need to talk about the appropriateness of block based coding in high school.

Our mission at TeachCS was to fund, via philanthropy from Tata, Microsoft, Google, and Quotidian Ventures, professional development for teachers in one of five National Science Foundation (NSF)-funded courses, three of which were aligned to the new (and so far quite successful) AP Computer Science Principles framework. A key feature of AP Computer Science Principles is that, unlike AP Computer Science A, which is based on Java, it is language independent.

Most of these new AP Principles courses, including ones like Beauty and Joy of Computing (BJC), which TeachCS supported, use block-based coding (BJC uses Snap!, which is based on Scratch).

As part of the Microsoft TEALS program, I taught what was essentially a beta version of AP Principles during the 2013-2014 school year, using Beauty and Joy and Snap!, at Uncommon High School in Brooklyn. Though BJC is a thoughtful, rigorous, well-designed course, my students were relentless in asking “when are we going to learn real coding”, by which they meant text-based coding. Last week, when I was visiting Brooklyn Technical High School, one of the most selective high schools in New York City, the student tour guide described, unprompted, her AP Computer Science Principles class as “not real coding” because they “just used blocks.”

Platforms like Codesters (disclaimer: Codesters, a computer science education company, is a client of TTM Advisors LLC, my employer) attempt to bridge that gap by allowing students to develop in a block-based environment and then, when they’re ready, switch over to text-based coding (in Codesters’ case, Python). I think models like that, as well as curricula that are simply text-based from the get-go, need to get more consideration in both middle schools and, especially, high schools. We can’t be afraid of text-based coding. Students, at least, are telling me that they think block-based programming is “for little kids”, and we must take that feedback into account.

We Need to Spread the Funding Around Better

Code.org and its founder, Hadi Partovi, have done a tremendous service to the CS community and to this recent wave of enthusiasm for CS education nationwide. Its signature event, the “Hour of Code”, has, like no campaign before it, made a huge impact on inclusion and participation, not just in the United States but around the world. CS Ed Week is still almost two months away, and already over 9,000 schools and communities around the world have committed to doing the “Hour of Code” this year.

In recognition of this great work, just last week Amazon announced a $10 million donation to Code.org. To put that in perspective, the total budget of the Computer Science Teachers Association is under $1 million a year. Amazon’s donation could have funded CSTA’s current operations for ten years.

I have not done the numbers, but my rough finger-in-the-wind estimate, from having done fundraising myself, is that over half of the philanthropy for CS education is going to Code.org, and I’m guessing that’s more like 2/3 of the funding coming from the big tech companies like Facebook, Amazon, Microsoft, and Google.

This is a difficult topic to discuss, but discuss it we must – Code.org’s incredible ability to raise philanthropy is making it difficult for other nonprofits in computer science (with a couple exceptions, like Girls Who Code, who also do a great job of marketing and fundraising) to make ends meet. Corporate Social Responsibility (CSR) departments are not known for being well-resourced themselves, nor for being risk takers – this makes a follow-the-herd model of simply giving money earmarked for computer science education to Code.org by default an easy choice for a haggard, overburdened CSR middle manager. And Code.org does do great work. But this present state of affairs doesn’t always make for a vibrant, robust community of thriving nonprofits.

I also think it’s important to make room in CSForAll for curriculum developers, coding platforms, and professional development companies that are for-profit (for purposes of this discussion I’m leaving out for-profit K-12 schools; I personally cannot yet support for-profit K-12 schools). McGraw Hill is for-profit. Pearson is for-profit. Nearly every ed tech startup you can think of is probably a venture-backed for-profit – companies like Knewton and Codecademy. Organizations that sell products and services to schools and districts are usually for-profits.

And yet, this current wave of computer science education momentum has largely been built on the assumption that the predominant business model for organizations involved in CS curriculum and professional development will be the 501(c)(3). Anyone who has run a nonprofit knows how difficult it is to raise money, and to keep raising ever more of it year after year, at least until earned revenue can become a significant source of funding, which can take many years. And because of the aforementioned dominance of Code.org in attracting CS philanthropy, the remainder of the CS nonprofits are left in a constant state of fierce competition amongst themselves for what amounts to philanthropic table scraps. It also makes young nonprofits vulnerable – when the cornerstone donor behind over 80% of our first-year scholarships and program operations at TeachCS decided midyear to shift its funding to a similar DonorsChoose program instead, it effectively killed TeachCS and left me personally out tens of thousands of dollars in back salary I could no longer pay myself (thankfully the aforementioned other four donors honored their commitments and we were still able to run a scaled-back program).

Meanwhile there is a ton of cash out there in the form of venture capital, both institutional and individual. There are business opportunities in CS education, and we have to stop being queasy about that – again, organizations that sell to schools are usually profit-generating businesses anyway. And so I’m happy to see for-profits like CodeHS and VidCode as partners at the CSForAll Summit.

I’d also add that the technology community – especially Google, Amazon, Facebook, IBM, Twitter, Microsoft, etc. – can do much more in terms of philanthropy. I’d challenge the tech sector to increase its giving – both philanthropy and VC – to CS education organizations by 10x over the next 3 years, and challenge each company to commit to supporting at least two CS organizations other than Code.org (again, this isn’t a knock on Code.org – but the CS education community will benefit from more diversification in philanthropy by well-heeled companies with valuations in the hundreds of billions of dollars).

And while I understand the impetus (and touch of arrogance) that pushes companies like Google (CS First), Apple (Swift Playgrounds), and Microsoft (MakeCode) to build their own CS programs, curricula, and platforms, the CS community – school communities, teachers, and students – would be better served by less fragmentation. While it may be wishful thinking, these companies could serve the community better by instead focusing on financially supporting the curricula and platforms already out there, many with principal investigators and a decade or two’s head start on academic peer review and in-class evaluation.

Corporations are Going to Need CS Education Strategies

In 2000, while I was still living in Silicon Valley, I joined KPMG Consulting LLC, then still part of KPMG LLP. Over the next six years, as the Enron scandal rocked the accounting and consulting community, I watched as KPMG spun out KPMG Consulting, which would be rebranded BearingPoint. In 2006 I left BearingPoint and joined Deloitte Consulting LLP’s corporate strategy practice, where I advised Fortune 500 tech and media companies on their channel, partner, and route-to-market strategies. Deloitte Consulting was (and is) still part of the overall Deloitte family of partnerships, the last of the group of Deloitte, KPMG, E&Y, PwC, etc. to retain its original structure, which includes both audit and its original consulting unit.

Since 2000 there has been tremendous change in the “Big 4” (then the “Big 5”). In 2000 the control and power within these firms lay with the audit and accounting partners. By the time I left Deloitte in January 2013, it was evident that consulting was on the ascent, both in terms of consulting revenue as a percentage of the US member firms’ aggregate revenues (Audit, Tax, Consulting, and Financial Advisory) and in terms of internal influence. A year prior to my departure, in January 2012, Deloitte bought Ubermind, in turn creating Deloitte Digital and moving fully into the market once traditionally dominated by the digital agencies. Deloitte, like most of these professional service firms, is not your grandfather’s white-shoe accounting firm.

What does all this have to do with computer science education? Computer science is becoming a critical skill in far more careers than ever before. Thirty years ago it would have been hard to imagine that firms like Deloitte or PwC would care much whether a candidate knew how to program a computer. Now it’s an essential skill for many positions, both client-facing and back office, at those companies. And as I argued last year in a guest post for Code.org, I think the traditional division between functional analysis work and software engineering is going to go away.

And so increasingly I think we’re going to see large organizations, and not just those in technology, developing explicit talent and philanthropic strategies for computer science. This will likely take the form of…

  • Increasing the share of philanthropic and other corporate giving going to computer science (the aforementioned overworked CSR manager is going to need help determining which organizations to support). In particular, corporations are going to be inclined to support organizations that are helping students develop the skills they will need most from prospective employees, whether that’s A/I, big data, cybersecurity, hardware design, or some other area of computer science.
  • Firms like Deloitte start formal recruiting in the junior year of students’ undergraduate programs, but they look even earlier to begin attracting and identifying top talent. Corporations are going to be looking to top computer science students and solid high school computer science programs as sources of prospective employees.
  • Computer science will be an ever more important part of talent development strategies, and so leading companies are going to want to engage more deeply with the computer science education community about best practices in teaching computer science.

What About the Boys?

This is perhaps my most controversial point. I don’t just think that – I know it, as I’ve shared the following viewpoint with several prominent people in the CS education movement and the reaction has been quite negative. But persist I will, as I think this is important.

My son goes to a progressive, politically left-leaning public middle school in the East Village of Manhattan. It does not yet offer computer science as part of the curriculum taught during the day – i.e., it’s a school that has yet to benefit from the NYC DOE CS4All program’s 10-year rollout.

However, the school does offer a Girls Who Code after-school club, which is open only to girls at the school.

Before I go any further, let me be clear: I think the world of Girls Who Code and its founder, Reshma Saujani, and nothing here is meant as a critique of that program or of her. There are also academically researched cases to be made for creating spaces for girls and young women to explore computer science on their own.

But I’d like to think that the intention of our inclusion efforts around computer science was not to end up with school communities where the de facto condition is that a student may not study computer science at school, even where it’s offered, just because he is a boy.

In the case of my son’s school, fault lies not with GWC but with the school administration and the DOE overall, as well as with an overall funding mechanism for schools in NYC – one that over-relies on the affluence of parents to pay for core programs, as if these public schools were in fact private – that can neither yet provide CS during the class day for my son nor fund an additional after-school CS program that would be open to girls and boys alike. This is not inclusion. This is not equity. This is not “For All”.

How did we get here? I think the CS movement has made a mistake by assuming the autodidact models of our own childhoods. These are outdated, errant, and even racially and privilege-based assumptions about how boys get exposed to computer science. These models include Gen X boys like me in the 1980s who huddled around Commodore 64s or Apple IIs programming BASIC between bouts of Castle Wolfenstein, or, more recently, archetypes like the young Mark Zuckerberg in the 1990s teaching himself to code basic web sites. These models are premised largely on the life experiences of privileged, suburban, mostly white boys and are not representative of boys in their full diversity.

Complicating this still more is that we – adults who can often remember command prompts and even DOS – do not always appreciate the degree to which programming has been abstracted away from the casual tech user. For many young people, programming a computer today is no less foreign than the idea of opening up the television set to fiddle with its internal circuits would have been in the 1970s. So even young autodidacts may not have the inclination to just start coding on their own. In other words, they need computer science to be explicitly offered and introduced to them, ideally by experienced teachers trained in peer-reviewed, academically pressure-tested curricula, whether those children are boys or girls. We must stop assuming boys will just go off and learn how to code on their own.

Having a strategy for boys need not, and should not, come at the expense of the many crucial programs, like Girls Who Code and Black Girls Code, that exist to promote inclusion and equity for young women (and I’d note that I’m also the father of a daughter). I’d argue that for boys the strategy is one based on using computer science to address the widening academic gap between boys and girls, which is now starting to manifest itself in the workplace as well. Computer science could be a powerful way to keep our boys from falling even further behind girls in the classroom.

To give a (hypothetical) example: if 10,000 students take AP-CSP in year one, of whom 7,000 are boys and 3,000 are girls, and then in year two those numbers are 5,000 boys and 5,000 girls, is that success? I say no, but I think there are some in our community who would say yes, that’s (some) progress. Again, these numbers are illustrative, but I think the goal should be not just a shift in relative terms but increases in absolute numbers, with the additional goal that girls’ participation increase at a (much) faster rate than boys’.

So in this example, success in my mind would be if in year two we grew male participation by 2,000 to 9,000 (a 29% increase) and female participation by 6,000 to 9,000 as well (a 200% increase), thus increasing the absolute total to 18,000 (an 80% overall increase) while also improving the gender ratio and achieving parity. Isn’t THAT what we want? And yet I get the distinct impression that for some, the ratio in relative terms is in fact the priority – that the first illustration, in which 2,000 fewer boys participate, would, yes, be a form of success. I just can’t get behind that. And while this example is illustrative, the practical policy of my own son’s school seems to be just this in real-life terms. CS is available to only one gender – girls. That changes the ratio, sure, but leaves my son behind in the process.
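
For the arithmetically inclined, here is a quick sanity check of those illustrative percentages – a throwaway OCaml snippet, using only the hypothetical numbers from the example above:

(* Percent increase from old_n to new_n. *)
let pct_increase old_n new_n =
  100.0 *. float_of_int (new_n - old_n) /. float_of_int old_n

let () =
  Printf.printf "boys:  +%.0f%%\n" (pct_increase 7_000 9_000);   (* ~29% *)
  Printf.printf "girls: +%.0f%%\n" (pct_increase 3_000 9_000);   (* 200% *)
  Printf.printf "total: +%.0f%%\n" (pct_increase 10_000 18_000)  (* 80% *)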

Finally, I’d observe that US civil society is not going to be a better place in 20 years if we have even more men wandering around with few if any skills relevant to the modern job market.

In Conclusion…

I have been but a bit-part, part-time contributor to several efforts to expand computer science. As mentioned above, the bulk of my work has been limited to New York City, and Brooklyn in particular. And while I taught CS in high school for a year as part of the TEALS program (and three years of English in Japan on JET), I am not a professional educator and do not possess a graduate degree in either education or computer science. It’s classroom teachers, especially in Title I schools, along with parents, whose viewpoints are perhaps the most critical. And incredible national leaders like Jan Cuny and seasoned CS educators like Mike Zamansky know far more than I ever could about the best strategies to roll out CS nationwide and within individual schools. So take all of that into consideration, and as a disclaimer regarding my thoughts above.

I wish I could be in St. Louis. The lineup looks incredible and there are a lot of smart people who will be in attendance. Have fun and do great stuff.

ReasonML and React

I’ve always loved programming. My first language was BASIC on the Commodore 64. Later I picked up VBScript, VB, JavaScript, ABAP, Ruby, and a bit of Java.

At Relay Graduate School of Education, where I was CTO from 2013 to 2015, we used PHP, and specifically the Symfony framework. The conventions of Symfony, coupled with its MVC pattern, eliminate some, though not quite all, of the chaos and noise endemic to PHP.

I learned a lot at Relay. One important lesson was that, especially in smaller organizations like Relay, CTOs really must allocate some of each week – I’d estimate 20% is about right – to hands-on development. CTOs shouldn’t expect to be able to “out-code” full-time developers who spend all day coding. But for myriad reasons – not least of which is credibility – CTOs must be able to code in the languages and frameworks of the organizational tech stack.

I came to Relay straight from Deloitte, and while my experience delivering large-scale programs at Fortune 500 technology and media companies had taught me a lot, it had been a long time since I had done much hands-on development, and I had never developed anything in PHP beyond “Tinkertoy” practice projects. While reading our code and evaluating data structures was never an issue, writing PHP code was not an area where I could lead by example. I was keenly aware of this deficiency. My team, I’m sure, picked up on my lack of sure-footedness. I regretted this, and I think I was a less effective leader as a result.

So after I left Relay, believing (as I still do) that those who will do best in this economy are those who are at once deep technologists AND deep strategists, I committed to filling what I thought were gaps in my formal computer science education.

Through my ongoing work in CS education I had become aware of Harvard University’s popular CS50 MOOC, offered through edX. But rather than take the free online version, I elected to enroll directly through the Harvard University Extension School, which cost me a couple thousand dollars in tuition but also earned me 4 graduate credits in CS and access to the professor and TAs like any other Harvard CS student. After successfully completing CS50 in May 2016, I decided to continue on and take CS61, a deep systems programming class in C and x86-64 assembly in which I did projects like building my own shell and developing a virtual memory page allocation routine.

After CS61 I still had a taste for something more and decided to take CS51, “Abstraction and Design in Computation”, which could probably be just as aptly titled “Functional Programming in OCaml”. CS51 was a complete flip from CS61. CS61 was deep down in the guts of the computer, dealing with registers and low-level I/O. CS51, by contrast, seemed to float in the ether of math and higher-order functions. And OCaml presented a totally foreign syntax, at least at first.

But once I started to tune in, once I opened my mind to a new way of approaching coding, the combination of succinctness and expressiveness blew my socks off. Here, for example, is the idiomatic solution to Euclid’s algorithm in OCaml:

(* Euclid's algorithm: recurse on (b, a mod b) until the second
   argument hits 0, at which point the first argument is the GCD. *)
let rec gcd a = function
  | 0 -> a
  | b -> gcd b (a mod b);;

The essential idea in functional programming is that everything is an expression, and expressions evaluate, via substitution rules in line with the lambda calculus, to values. That made a ton of sense to me. So too did ideas like map, filter, reduce, immutable state, currying, etc. Perhaps most importantly, my exposure to OCaml left me convinced that static typing is the way to go – as Yaron Minsky of Jane Street, the largest institutional user of OCaml (though Facebook is catching on fast), says, a whole class of errors just goes out the window with a type system like OCaml’s.
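
To illustrate a couple of those ideas – this is my own toy example, not something from CS51 – here’s map, filter, fold, and currying at work in OCaml:

(* Sum of the squares of the even numbers in a list, built by
   piping the list through filter, map, and fold. *)
let sum_of_even_squares xs =
  xs
  |> List.filter (fun x -> x mod 2 = 0)
  |> List.map (fun x -> x * x)
  |> List.fold_left (+) 0

(* Currying: applying a two-argument function to one argument
   yields a new function that awaits the second. *)
let add x y = x + y
let add_ten = add 10

let () =
  assert (sum_of_even_squares [1; 2; 3; 4] = 20);
  assert (add_ten 5 = 15)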

Back to Relay for a moment – one of the last projects during my tenure was a bake-off between the two most dominant JavaScript frameworks, Angular and React. We ultimately chose Angular, but I liked what I saw in React and have kept abreast of it in the time since, developing some projects of my own using React. During that time React’s popularity has grown a ton.

So when, while doing my final project in CS51 (a meta-circular OCaml interpreter), I heard about ReasonML, “a new syntax and toolchain” for OCaml that makes OCaml a bit more accessible syntactically to developers coming from languages like JavaScript, I was intrigued. But I got really excited when I learned that some of the same team building ReasonML also works on React. Thanks in large part to BuckleScript, an OCaml-to-JavaScript compiler developed at Bloomberg, ReasonML is now a great way to build React applications; among numerous other benefits, ReasonML brings to React the full power of the OCaml type system. And as context, React was originally prototyped in a cousin and antecedent of OCaml, so this – React using OCaml (ReasonML) – is full circle and back to its roots for React.
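
To give a feel for the syntax, here is the same gcd from above as I’d write it in Reason – a sketch only, but the semantics are identical to the OCaml version, just with braces and arrows that will look familiar to JavaScript developers:

/* Euclid's algorithm again, this time in Reason syntax. */
let rec gcd = (a, b) =>
  switch (b) {
  | 0 => a
  | b => gcd(b, a mod b)
  };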

There are numerous videos and tutorials out there about both ReasonML and using ReasonML with React (and React Native). If you’ve developed apps in React, I suggest you give it a try. There’s a learning curve at first, for sure, but soon enough you’ll get the hang of it, and the OCaml type system, coupled with capabilities like pattern matching via match (renamed “switch” in ReasonML), will make you a convert.

If you’re already pretty comfortable in FP, especially OCaml, and have some React in your bag, this YouTube video of a recent lecture by Jacob Bass is great. If you need a gentler introduction, check out this ReasonReact tutorial by Jared Forsyth. Also check out the official ReasonML and ReasonReact documentation, of course. And the support from the ReasonML community on Discord is incredible.

ReasonML is still new and there are rough spots here and there, especially when using it for production-class systems. But it’s gaining momentum fast and has big companies behind it, along with a whole bunch of developers who know, of course, that JavaScript is the lingua franca of the browser but would like an alternative with static typing, for front-end development in particular. ReasonML, which compiles down to JavaScript via BuckleScript, is just that.

Here’s to Reason!

Helping with Harvey

For the last couple weeks I’ve been lending a bit of a hand to the civic hacking community in Houston, which is playing a critical role in the rescue and recovery efforts in Texas and has now turned its attention to helping prepare for Irma in Florida. Angela Shah of Xconomy wrote an article on this work that includes a quote from me about how it relates to some similar work I helped lead (NYTechResponds) in 2012 after Hurricane Sandy.

The Fierce Urgency of Now

Earlier today I attended a stakeholder and planning meeting for “The Campus”, the “first technology and wellness hub at a public housing site in the United States.”

Our meeting was at the Howard Houses in Brownsville, Brooklyn. It’s in the Howard Houses that The Campus operates. Brownsville and public housing projects like the Howard Houses have been largely left behind by the surge of investment (and gentrification) in Brooklyn in the last 15-20 years. Brownsville still suffers today from high levels of crime, violence, and poverty.

A key goal of “The Campus” is to provide opportunity to young people, especially young men and women of color who live in public housing. It is hoped that through technology, especially computer science, as well as programs in entrepreneurship and wellness, we can provide youth the hope, confidence, and career skills with which to turn lives and communities around.

Tragically, for one man our work came too late. About twenty minutes before the start of our meeting, Rysheen Ervin, 28, with a whole life still ahead of him, was shot immediately outside our meeting room, only a few feet from a public school. He died of his wounds. The shooting was witnessed by my friend State Senator Jesse Hamilton, sponsor of The Campus. Senator Hamilton recorded this powerful video immediately after the shooting. This violence had a deep impact on everyone in attendance, including me.

At last week’s CSForAll Summit at The White House, a key theme was broadening participation and making sure the “For All” in CSForAll is not just a platitude. On Thursday, Mayor Bill de Blasio will give his one-year update on New York City’s CSForAll initiative. During his speech we can expect to hear much about the city’s efforts to keep the “For All” at the forefront.

To complement and magnify CSForAll and the work of its foundation partner CSNYC, Borough President Eric L. Adams (also a sponsor of The Campus), his staff, myself, and a number of nonprofit and private sector partners put together CodeBrooklyn last year. The purpose of the CodeBrooklyn campaign is to champion the expansion of computer science and STEM in our schools, especially in communities like Brownsville, with the goal of establishing computer science in every Brooklyn school within 7 years – 3 years ahead of the city target. We’re still in the early days, but last year we were able to help get over 80% of Brooklyn schools to participate in the Hour of Code.

Senator Hamilton is a key supporter of CodeBrooklyn; he held one of the first hackathons in Brownsville last year. Another supporter of CodeBrooklyn is City Councilmember Laurie Cumbo, who at a CEC 13 meeting in October 2014 literally jumped onto the stage at PS 307 to join CSNYC board chair Fred Wilson in giving impromptu, moving testimony about the civil rights case for computer science.

The fight for civil rights brings to mind Dr. King. The death of this man, Mr. Ervin, literally before the eyes of those gathered to plan for The Campus, gave new relevance to these words of Dr. King:

“We are now faced with the fact, my friends, that tomorrow is today. We are confronted with the fierce urgency of now. In this unfolding conundrum of life and history, there is such a thing as being too late. Procrastination is still the thief of time. Life often leaves us standing bare, naked, and dejected with a lost opportunity. The tide in the affairs of men does not remain at flood — it ebbs. We may cry out desperately for time to pause in her passage, but time is adamant to every plea and rushes on. Over the bleached bones and jumbled residues of numerous civilizations are written the pathetic words, ‘Too late.'”

For the man murdered today, and perhaps his murderer as well, we were “too late.”

Let me be clear – computer science education is not a panacea for all of our nation’s problems. The challenges in communities like Brownsville – or McDowell County, West Virginia – are a Gordian knot that cannot be cut with a couple lines of JavaScript. But our commitment to inclusion and participation in computer science education is a right and important first step in creating new opportunity for communities the economy has left behind.

And so let us resolve to act, in the memory of this man killed today at the Howard Houses, with Dr. King’s “fierce urgency of now”. Let us never be “too late” again.

Do the Right Thing (Thoughts on Middle School Admissions)

I am a parent of three children, serve on CEC 13, live in a weird hybrid of NYC DOE Community School District 13 for elementary and D15 for middle school, and serve as Youth Chair for CB6 (which overlaps mostly with D15 and a bit with D13). I am an aspiring policy wonk (though I have a long way to go to reach Brad Lander levels). Of my three kids, one is in middle school and two more are on the way there. I think about middle schools and admissions policy a lot, especially in the D13 and D15 parts of Brooklyn. With this as context…

A couple years ago, when I’d tell a friend that this area of Brooklyn has among the very most segregated schools in the nation, they’d look at me like I had three heads.

Now, a few years later, and after the contentious PS 8 / PS 307 rezoning, for which I was one of the votes (and which, I should note, was presented to the CEC by the DOE as a capacity matter but quickly became a segregation story in the media), the staggering segregation of Brooklyn’s schools is no longer news. Google the topic and there are all sorts of stories. Here are a few:

Last month Patrick Wall wrote an article for Chalkbeat that appeared in The Atlantic about District 15 middle school admissions – it’s a good and important read.

The one article I keep coming back to, though, is the one published in Vox last February, perhaps because it comes after the CEC – and, by extension, me personally – a bit, on account of our advocacy for MS One Brooklyn.

I also keep coming back to it, I think, because the narrative – maybe inadvertently on the author’s part – hits on one of the most troublesome parts of this discussion, especially when it comes to middle school admissions in Districts 13 and 15. Which families, if any, are to blame? And what does it mean to do the right thing, both for your child and for the community at large?

I want you to imagine two families. Let’s for now leave race out of the picture. In both families, both partners work and both have master’s degrees. In both families one spouse makes $125,000 and the other makes $175,000, so both households have an income of $300,000. This may sound like a lot, but in Brooklyn in 2016 it is really not that rare anymore.

Both families have one 4-year-old child and decide to move from Williamsburg to Clinton Hill to buy a home. The homes are, in fact, next door to each other, and both cost $1.3 million. Even at $300,000 in household income, affording a $1.3M home is a stretch, but they pull it off. The mortgage takes private school off the table, and both sets of parents tell each other, perhaps somewhat unconvincingly, that their belief in the importance of public school matters more anyway.

Both families secure a seat for pre-K at an official, privately run NYC DOE pre-K center in Clinton Hill. In the winter of their children’s pre-K year, though, with kindergarten approaching, the stories start to diverge.

One family, we’ll call them “Family A”, has made friends with another couple, “Couple C”, also in Clinton Hill. Couple C’s 3rd grader attends PS 321 in Park Slope, and their oldest is a 6th grader at MS 51, also in the Slope.

Couple C insists to Family A that PS 321 is the best elementary school in Brooklyn and “really the only option”. Family A responds that they are not zoned for 321, or even in its district (15). How can we get our child in, they ask. Couple C insists that “everyone finds a way in … everyone does it. Everyone gets their kids into 321” (including, it must be noted here, the parents of the author of the Vox piece, by her own admission – “In 1990, I started kindergarten at PS 321 in Park Slope, Brooklyn, the borough’s best elementary school, because my family lied”).

Couple C further explains that “while you used to be able to just borrow an electric bill from a friend, the DOE has started to get more strict”, and that the way most people do it now is by renting an apartment in the PS 321 zone for a few months at the start of the kindergarten year before moving back to Clinton Hill, since once a child is at a school they can stay there through graduation. While it feels like a financial stretch to Family A to pay for a few months of rent in Center Slope – they figure all told it’ll cost them about $12,000 – it’s far less money than six years of private school, which they had given up as an option when they bought their home. They decide to do as Couple C has suggested: they find a small studio in Center Slope, rent out their home in Clinton Hill via Airbnb for a few months, and enroll their child at 321 for the fall. Come the holidays they are back in Clinton Hill and getting used to the daily walk to 321, which takes just under a half hour.

Back to “Family B”. They also have friends who have told them about the different ways to get a seat at schools like 321 or 29 in District 15, as well as at PS 8. But they’ve also visited PS 20 – both Family A’s and Family B’s zoned school (remember, they are next-door neighbors) – and they like what they see: a diverse community. While they recognize PS 20 may not have a million-dollar PTA budget like PS 321 does, they’re ready to “roll up their sleeves” and work hard to raise money for the school. They like that the school is in the neighborhood, an easy 3-minute walk from home. Off to PS 20 their child goes.

Let’s next fast-forward 5 years. Our two families’ children are now ready for middle school.

Because Family A’s child went to a District 15 school, PS 321, Family A’s child is entitled to apply to District 15 middle schools, including and especially the “Big 3” (447, 51, New Voices) cited in the Chalkbeat/Atlantic article. Family A’s child applies to 447, 51, and MS 88 and gets accepted at 447, the family’s first choice.

Family B’s child is still in District 13 and applies to middle schools there. Family B decides their top choices are MS 8 and Arts & Letters. These two schools have the highest test scores in District 13, and to Family B, who are thinking a lot about academics with high school and then college on the horizon, test scores matter. Moreover, having spent much of the past five years working bake sales nearly every weekend to raise money for PS 20’s PTA, they are ready for a school with a larger PTA budget, as is the case at both PS/MS 8 and Arts & Letters. PS 20 was great, but Family B put in a lot of work, time, and commitment and don’t know if they can sustain the same pace of volunteering.

Family B recognizes, though, that because both MS 8 and A&L give admissions preference to students continuing on from those schools’ respective lower schools, their child’s chances of getting a seat at either are low – perhaps even lower than the odds the Family A student, still their next-door neighbor, faced at 447 and 51. Family B decides to also apply to a citywide-admissions middle school in Manhattan that a couple friends have recommended – it’s a stretch, but the school has a reputation for solid academics and it’s another option to put in play. It’s both a reach and a backup plan in one.

As it turns out, come May of 5th grade, Family B’s child does not secure a seat at either A&L or MS 8. Instead, the DOE offers the Family B child a seat at Unisons, AP Piller’s school, as the DOE algorithm will do when it can’t make a match to a preference listed on the application. Somewhat unexpectedly, though, Family B’s child does get offered that seat at the Manhattan citywide public school. While they are worried about the daily commute to Manhattan, Family B decides on the Manhattan school over Unisons, citing the test scores and, even more than the test scores, the fact that the Manhattan middle school places many of its graduates at Stuyvesant, Bronx Science, or Brooklyn Tech. Family B, like Family A, has every expectation their child will go on to post-grad education, and they are starting to think about how to get from here to there.

So, at the end of 5th grade, Family A’s child leaves 321 for MS 447 in Boerum Hill; Family B’s child leaves PS 20 for a school in Manhattan. Come 6th grade, neither student is attending school in the neighborhood, or even in District 13.

These two scenarios are both very common; I drew them up as composites to protect anonymity, not as works of fiction.

Let’s now fill in the picture a bit more on Families A and B and assume that in both families, both partners are white. And at both of their new middle schools, the children will be in the racial majority.

Through their choices – by ultimately not choosing Unisons or another majority-students-of-color D13 middle school for their children – both Family A and Family B made the segregation of our schools more entrenched. Both families could have helped integrate Unisons, which AP Piller says in her piece is essential (“…(S)egregation is unacceptable. No amount of curriculum magic, or experienced teachers, or school choice, can overcome the fact that to overcome educational inequality, white students need to be in school with minority students”).

And then I wonder which family, A or B, AP Piller might fault more, if either, for their respective “contribution” to segregation and, in turn, for the impact of their decisions on Unisons. Family A? Family B?

It’s Family B that was still in District 13 at 5th grade, and who would have seen the Unisons presentation at a PTA meeting. It was the child of Family B that was offered a seat at Unisons – a seat Family B turned down.

And Family A? They left D13 years ago. Do they get a pass? What culpability do they own? When they opted out of their zone school, PS 20, to pursue PS 321 at kindergarten, they were effectively opting out of District 13 as well. Perhaps, going into kindergarten, middle school was the furthest thing from their minds. Or, just maybe, they were very aware (perhaps because Couple C told them) that not only would they get the benefits of 321 but also the privilege of applying to the “Big 3”. Remember too – because Family A still lived in D13, they’d have retained the option to apply to Unisons. Do you think the Clinton Hill family with a 5th grader at 321, Family A, applied to Unisons? In my fictitious example, do you think they even visited, or even thought about, the school? What of MS 266, a District 13 middle school in Park Slope they might have walked by every day on the way to 321 – was that ever on Family A’s list?

While Family A was enjoying the dividends accrued at 321 from privilege, money, and power, Family B was building up their local neighborhood school. Do Family B’s sweat equity and financial contributions, in the “calculus” of political correctness and “doing the right thing”, somehow balance out their not choosing Unisons for their child? Or should we nonetheless indict Family B for “selling out” some idealized progressive vision for our schools and the choices we think parents should unilaterally make in the name of the greater good? And if so, what then of Family A? What is their “culpability”? Should they have returned to D13 after PS 321 in order to “do the right thing” and help integrate Unisons?


Of course, this all isn’t really about AP Piller, a dedicated educator here in our district, or her thoughtful piece in Vox. This isn’t about our “fictional” families or the families you and I might know that are a lot like them.

What this is all really about is living our publicly espoused values in our own lives and with our own children’s futures. It’s one thing for a white parent to stand up at a “town hall” and decry the segregation and demand changes. It’s quite another for that same parent to send her child to a majority-minority school.

This is also about the constant “calculus”, as I call it above, that permeates how Brooklyn parents size each other up, day by day, interaction to interaction. Here in Brooklyn, we like judging others and their choices almost as much as we like fresh cheese from the co-op and sledding the hills of Prospect Park in the winter. The progressive purity test of our neighbors is one we never opt out of.

I hope we can get a discussion going about how these different scenarios and dynamics are playing out in our schools, especially in District 13 and 15 middle schools – and, crucially, between the districts, instead of looking at them as islands. Ultimately the patterns of school choice we see are the aggregate of lots of individual choices, which over time form well-trodden paths. I think many of us want the same thing: schools with strong academics that foster character growth and social-emotional welfare within diverse, integrated settings that draw on the melting pot of Brooklyn.

But how do we get there? Can we maintain a full choice model and integrate our schools at the same time? Should we focus on “quality” first? What does quality even mean? What does it mean to implement controlled choice models that purportedly will not constrain choice? How do we implement policies that encourage more students to stick around in districts like 13 for middle rather than policies that end up only driving more families away?

Have the districts outlived their usefulness and appropriateness as constraints on middle school choice, especially considering the arbitrary nature of many district lines in the context of Brooklyn in 2016?

Much of the DOE integration policy seems to assume voluntary integration by families like “A” and “B” above – is that realistic? Can integration happen without removing some form of choice?

How do we do the right thing, for both our own kids and the community at large?

North Slope Middle School Admissions Policy

A couple days ago I stumbled upon a Brownstoner post from May entitled “From the Forum: How Do I Navigate Park Slope’s Public Schools?”, which was in fact a question specifically about North Park Slope middle school admission policies. I live in North Slope, have sent all three of my kids to the North Slope zoned elementary school (PS 282), and have been through the middle school process with my oldest child. I’m also an elected member of CEC 13 and the Youth Chair of CB6 (which overlaps mostly with District 15 but overlaps D13 in North Slope).

Public middle school admissions in NYC are notoriously complex, but they are especially confusing in North Park Slope (by which I mean the District 13 portion of North Slope, which is also the PS 282 zone, bounded by – and please, look at the district map on NYC School Search – Union St between PPW and 6th Ave, 6th Ave, President St from 6th to 3rd Ave, 3rd Ave to St. Mark’s, St. Mark’s to 4th Ave, 4th Ave to Bergen, Bergen to 5th Ave, 5th Ave to Flatbush, and then Flatbush to Plaza and Union St at GAP).

I posted a response – albeit a few months after the original post. I am sharing it here because I hear a lot of questions from parents in the Slope and surrounding neighborhoods about this policy. It’s very confusing, but the headline is that if you LIVE in the 282 zone, your child is conferred District 15 (e.g., schools like 447, 88, 442, 51, etc.) middle school choice, even though technically North Slope is part of District 13. This is a function of where you live, not where your child goes to elementary school.

Anyway, here’s my response, which includes the 4 most common scenarios I hear about (plus four more I’ve since added in brackets):

… (Y)ou should always go to schools.nyc.gov/schoolsearch/ first to check the situation with your home address, its elementary zone, and its overall district, which constrains (somewhat) 1) your child’s movement [choice] among elementary schools and 2) your child’s middle school choice.

First, know that middle school choice is a function of 1) the district of your child’s elementary school and 2) the district of your home address. In many cases these are the same, but when they are not, you will get access to BOTH sets of schools, and the middle school form generated by your child’s elementary school will include the union of the two districts’ middle schools.

Regarding the 282 zone [the District 13 portion of North Slope], it is correct that the 282 zone (i.e., addresses in the 282 zone – a.k.a. “North Slope”) “flips” to D15 for middle school. [Note Inside Schools also validates this – “Because of quirks in zoning, children who are zoned for PS 282, part of District 13, are eligible to attend District 15 middle schools and many take advantage of that option.”]

That is, while PS 282 is a District 13 elementary school, (home) addresses in the PS 282 zone (i.e., students who live IN the zone, not who merely attend school AT PS 282) get District 15 choice.

Here are 4 scenarios:

1. You live in the 282 zone and your child goes to PS 282 for elementary school (my first hand experience): Your child will get D15 schools (e.g., 442, 447, 136, 51) because of your address AND will also get D13 schools (Arts & Letters, the Dock Street school, eventually the new middle at Atlantic Yards) because 282 itself is a D13 elementary. (D15 + D13 = D15/D13)

2. You live in the PS 9 zone [Prospect Heights] and your child goes to 282. Your child will ONLY get D13 choice because the PS 9 zone does not have this “quirky” rule [about D15] and 282 is a 13 school.  (D13 + D13 = D13)

3. You live in the 282 zone and your child attends 321 [not getting here into the how of how you pulled that off]. Your child will ONLY get D15 choice because you live in the 282 zone (which flips to D15) and PS 321 is a D15 school.  (D15 + D15 = D15)

4. You live in Prospect-Lefferts Gardens (D17) and your child attends 282. Your child will get access to both D13 and D17 middles (but NOT D15) (D13 + D17 = D13/D17)

[Adding four scenarios – three scenarios for PS 133 and one for PS 9…

5. You live in the 282 zone and send your kids to PS 133. Your child will get D15 middle school choice (because of your address) AND D13 choice (because PS 133 is technically a D13 elementary school, even with its dual-district elementary school admissions policy). (D15 + D13 = D15/D13)

6. You live in, say, South Slope – i.e., D15 – and you send your kids to PS 133. Your child will get D15 middle school choice AND D13 middle school choice. (D15 + D13 = D15/D13)

7. You live in, say, Clinton Hill – i.e., D13 – and send your kids to PS 133. Your child will ONLY get D13 middle school choice (D13 + D13 = D13).

8. You live in North Slope (i.e., the 282 zone) and send your kid to PS 9. Your child will get D15 choice from your home address and D13 choice because PS 9 is a D13 school. (D15 + D13 = D15/D13)]

Note, 282 also has a middle school – MS 282. MS 282 and MS 266 [on Park Place and 6th Ave in Park Slope, but temporarily relocated to the PS 93 building in Crown Heights for two more years while the building is refurbished] are D13 schools (despite being located in the D15 middle school choice catchment, to make it even more confusing) and adhere to the same policies as other D13 middles such as MS 113, Unisons, and Fort Greene Prep … with one additional wrinkle – MS 282, like MS 8 and Arts & Letters, also in D13, gives an additional admissions preference to “continuing on” students from its lower school, as these 3 schools have, to some degree or another, a K-8 model.
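
Since this is, at heart, a simple union rule, here is a minimal sketch of it in OCaml (the zone names and function names are mine and purely illustrative – this just encodes the scenarios above):

(* Middle school choice = districts conferred by the home address
   (with the 282 zone "flip" to D15) unioned with the district of
   the child's elementary school. *)
type district = D13 | D15 | D17

let home_districts = function
  | "282 zone" -> [D15]                       (* the quirk: flips to D15 *)
  | "PS 9 zone" | "Clinton Hill" -> [D13]
  | "South Slope" -> [D15]
  | "Prospect-Lefferts Gardens" -> [D17]
  | _ -> []

let middle_school_choice ~home ~elementary_district =
  List.sort_uniq compare (elementary_district :: home_districts home)

(* Scenario 1 above: live in the 282 zone, attend PS 282 (a D13
   school) => D15/D13 choice. *)
let () =
  assert (middle_school_choice ~home:"282 zone" ~elementary_district:D13
          = [D13; D15])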

I hope this helps clear up some of the confusion around this policy (and note I’m not getting into out-of-district variances, which is a whole other matter). Again, this policy is something parents ask about a lot. While North Slope parents have lots of reasons for choosing elementary schools other than their zone school, 282 (or nearby 133, also in North Slope, which is de-zoned), I always cringe a little when someone who lives in North Slope cites D15 middle school choice as the only reason they did whatever they did to secure a D15 elementary school seat – their home address had already conferred District 15 middle school choice on their child, regardless of the district of their child’s elementary school.