Cooperative Leaders and Scholars, Community Venues and Cultural Land Trusts

As an alum of the Cooperative Development Foundation’s Cooperative Leaders and Scholars program, I returned this year for their Denver coop tours trip and co-facilitated a personal asset mapping workshop for the new cohort. I had a lot of fun! We chatted with local coop leaders from the Rocky Mountain Farmers Union, the Center for Community Wealth Building, and CoBank; visited the Southwest Food Coalition; ate lunch from worker coop Mujeres Emprendedoras; and visited Montevista Mobile Home Park and the Queen City Coop.

I also met up afterwards with my coop collaborator Nathan to work on our Governance Archaeology project together (site to be live soon!) at CU Boulder.

Community Venues and Cultural Land Trusts

I’ve been thinking a lot lately about community venues and community spaces, which has led me to think more about community real estate options. In NYC, so many DIY art, music, and community spaces closed over the pandemic. The ones still around today are under constant threat from rising rents, insurance costs, etc. The main example I’ve seen of community-owned music venues is in the UK, through the Music Venue Trust, where, using a Community Benefit Society (a not-for-profit legal structure that I believe is specific to the UK), local community members can buy community shares to raise capital to purchase venue buildings. (Am hoping to publish some writing soon on community music models, including venues as well as platforms.) I’ve also heard of a DIY art space in DC, Rhizome, attempting to buy their building using a community financing model based on Land in Common, a nonprofit community land trust in Maine.

On this trip, Jenna, one of the current CLS cohort members, told me about cultural land trusts, a community model for art and culture organizations to collectively steward land and buildings. There are actually a few examples of this, based in British Columbia, London, Austin, Seattle, and elsewhere.

I also learned on this trip about the NYC Real Estate Investment Cooperative (NYC REIC), which I was so excited to find, though after looking around, it sadly seems to be inactive now. I’m really curious how these efforts went and what we can learn from them.

Very excited to explore these topics more! This concept of community real estate feels salient to me as music spaces near and dear to me in NYC have been closing or are at risk of closing. While more are forming as well, and I know that the nature of New York can often be ephemeral, I’m also thinking about ways for communities to be able to share homes together. I know at least three friend groups ready to pool in on a coop home together, and I’m wondering: what financial and legal infrastructures do we have, at either the state or federal level, to support these kinds of efforts in community and collective wealth building?

[Talk] Governable Spaces | Collective Governance: Governance Archaeology

Had a lovely time sharing a talk and panel earlier this month at the Brooklyn Public Library on Governable Spaces: Tech for Democratic Communities alongside colleagues and collaborators Nathan Schneider (Governable Spaces), Hazel Devjani (Metagov), and Rudy Fraser (Papertree).

Topics we collectively covered:

  • governable spaces and implicit feudalism

  • financial infrastructure for mutual aid

  • toolkits for governance

For my portion, I also covered:

  • collective governance and commoning

    • examples: participatory budgeting, cooperative models, digital governance

  • governance archaeology — a collaborative project between myself, Federica Carugati, Nathan Schneider, and Júlia Rodrigues, investigating collective governance practices across history, geography, and culture (more to come soon!)

  • Indigenous data sovereignty


Watch the recording on the Internet Archive here:
>> Governable Spaces: Tech for Democratic Communities

[Essay] Privacy-Preserving Data Governance, Ash Center Occasional Papers Series

Published an essay with the Harvard Kennedy School Ash Center for Democratic Governance and Innovation, sharing some recent research and ideas on privacy-preserving data governance, covering:

  • how emerging privacy-enhancing technologies (PETs) can serve vulnerable communities such as sex workers;

  • how privacy-preserving data collectives can enable community power;

  • how interfaces for data consent can create infrastructure for community agency;

  • models of access, usability, and responsibility over “ownership”; stewardship, consent, and agency over “control”;

  • community research and co-design.

decentralized networks for community care, dweb reflections, general updates

/ personal updates 

wanted to document and reflect on some events from this past year, including:

  • quit my job at google this past summer

  • traveled and spent time with friends for a while 

  • explored zero-knowledge cryptography, privacy preserving identity, and secure multi-party computation 

  • biked, danced, and played in the desert dust at burning man

  • in theory, am a part of peer to peer residency – though i keep missing the sessions due to time zone differences … 

  • danced to excellent music nonstop at sustain release

  • started building analog modular synths with stem modular, helped table at the brooklyn synth expo

  • organized and live-coded visuals for a show! tender alchemy @ hex house 

  • still making music 

also attended a number of gatherings where i met a lot of great people and made new friends! including: 


/ dweb camp reflections 

wanted to write a little bit on my experience at decentralized web camp! i was able to go thanks to the generous support of dweb and attended as a governance track fellow.

as a fellow, i arrived a few days before camp and helped with build and setup – it was really cool to see the community mesh network installed over a few days across the redwood forest! i think the early arrival also helped me ease into the camp socially, as the crowd grew from a more intimate ~60 people to something much bigger once dweb camp officially started. 

i felt like i benefited a lot from meeting and engaging with so many international attendees and was pleasantly surprised at how large the latin american and indigenous presences were. i made friends who were from and/or working with communities in berlin, argentina, brazil, india, and the new zealand maori community, as well as the cohort of healing waters fellows, who focused on indigenous water stewardship practices. 

the overall culture felt very casual and also pretty genuine. as opposed to more transactional, networking-type events, the camp felt like a low pressure, authentic gathering of cool, interesting, caring, and smart people who shared similar interests and wanted to meet, learn from, and build community with one another. 

the way the event was organized felt highly distributed in nature as well, with workshops and programming emergent from the attending communities. the sessions were spread over a couple of days, covering governance models, distributed and decentralized networks, and cooperative and community leadership, and included fun activities like stargazing and making music together. i also enjoyed the casual meal times, where you could easily meet a new set of people to process, engage, or decompress with.

i co-facilitated a workshop on distributed networks for community care with zarinah agnew, someone whose work i really respect and admire. informed by previous experiences in community building, transformative justice, and governance methods, we held space for open discussions on different approaches to distributing community care for increased resilience and sustainability and led people through a social ecology mapping exercise for identifying and reflecting on their own care webs. 

it was exciting to see so much ongoing work in this space, both from community organizations that have been around for a while and from those newly emerging. my wish is to see more community collaboration, as i notice a lot of overlapping research, interests, and efforts, including between more formal institutions and more casual community groups. i’d also like to see increasing access and ease for new people to get involved. 

overall, i felt excited going into the camp and left feeling satisfied and inspired, with the biggest highlight being the friends made along the way (very cheesy). i really enjoyed my experience and would definitely recommend it to anyone who’s interested in distributed technology and community networks <3 


/ what’s next?

currently, i’m back in new york and hoping to stay put for at least a little bit. i’m building hardware synths part-time and making music as i enjoy my sabbatical. soon, i think i’d like to take on more part-time or possibly full-time work, especially related to researching: coops and governance; cryptography; distributed systems; hardware for music :-) 

coops and governance: mood board

tldr: still deep in my coops + governance + web3 rabbit hole :-) also join our coops + governance discord!

have since joined a number of lovely web3 communities, including kernel, web3baddies, and crypto, culture, and society! had a really nice time at ethdenver and schelling point recently as well (special thanks to hudson jameson and 3box labs for making this trip possible) :-)

wanted to compile my current coops + governance + web3 mood board, a list of resources and topics that have been simmering for a while. would love to dig deeper, research, write, and collaborate with others on these topics + hoping to maybe go through a round of applying for grants to formalize some of these explorations, conduct interviews, facilitate conversations, and co-create resources. if you’re interested in any of these, please contact me, i do read my messages!

special thanks to friends and collaborators kadallah burrowes, em, and bert muthalaly for their feedback and conversations <3

also — just created a coops + governance discord server (join us!) :-) it started as a big signal chat of friends, and i’m hoping to open it up into a more accessible community. would love to have more ppl to share ideas and resources and start conversations with!

word cloud of primary interests rn: coops, governance, solidarity economy, collective ownership, public goods and the commons, coalition building, p2p models, distributed networks, systems thinking, interdependence
+ secondary interests: privacy, zero knowledge proofs, cryptography


coops + governance mood board 

// coops


// governance 

  • polis

  • quadratic voting

  • arweave profit sharing communities: tokens / voting

    • voting power = ownership * commitment (staking) time (tiny sketch after this list)

  • dynamic participation ?!

    • interested in exploring governance models that allow for dynamic and fluid frameworks of participation, e.g. different people can have varying levels of involvement over time 
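
here’s a tiny sketch of the voting weight idea from the list above, assuming a simple linear lock-time multiplier (names and numbers are made up for illustration, not how any particular profit sharing community actually implements it):

```python
# toy sketch: voting power = ownership * commitment (staking) time
# hypothetical numbers; real implementations differ

def voting_power(tokens_owned: float, lock_periods: int) -> float:
    """weight a member's vote by tokens held times how long they commit to stake them."""
    return tokens_owned * lock_periods

# a small holder with a long commitment can outweigh a large holder with none
print(voting_power(tokens_owned=100, lock_periods=12))  # 1200
print(voting_power(tokens_owned=1000, lock_periods=1))  # 1000
```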

// profit sharing, local currencies, funding models

// mechanism design

  • demurrage - a small charge on holding money, meant to encourage exchange and circulation without causing inflation
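
a tiny sketch of how demurrage plays out over time (the rate and balance are made-up illustration values):

```python
# toy demurrage: holding money costs a small percentage each period,
# nudging holders to circulate it rather than hoard
balance = 100.0
rate = 0.01  # hypothetical 1% charge per period

for period in range(12):
    balance *= 1 - rate

print(round(balance, 2))  # ~88.64 left after holding for 12 periods
```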

// public goods, the commons 

// universal basic income (ubi) / universal basic services (ubs); debt jubilees

  • tradeoffs: legibility, proof of personhood, accessibility, quality of services  

    • universal basic income

      • pros: those who are less legible to institutions may have more autonomy for support

        • e.g. trans or poc access to hormones may not be supported by traditional healthcare systems

      • cons: for those in debt, ubi may just go towards paying it down; basic needs may not be met for those who exist at a deficit or those who are working against the system

    • universal basic services

      • pros: ensures basic needs are met. e.g. healthcare, education, housing, legal services, etc. 

      • cons: less autonomy over what an individual may need; institutions may not provide adequate care for marginalized communities

  • proof of personhood, sybil resistance

  • emerging ubi systems

  • other experiments in ubi

  • debt collective: debt jubilees, debt abolition, rolling jubilee fund 

// orgs

// books 

// resources 

coops and governance

gave a very casual talk this week at a month-long retreat i’ve been at in the adirondack mountains :-) my friend wes and i hosted a “crypto double feature” where i talked about distributed coops and they covered rsa cryptography.

have been thinking a lot about how crypto and blockchain tech can be used for community building and organizing, specifically for facilitating the distribution of wealth, supporting creative and activist communities, and redefining the way we govern social systems and resource sharing.

particular topics of interest include: distributed coops (discos), universal basic income in blockchain networks (who watches the watchmen?), commons-based peer production (cbpp), and cities and governance.

here are the slides:

a very huge thank you to my friend kadallah for introducing me to discos (distributed cooperative organizations) and for all their deep insights and inspirations and our many conversations on distributed communities and worker-coops for artists and activists!

special shoutouts as well to friends ryan, bert, wes, ember, and joe for being sounding boards throughout this month as i talk endlessly on this topic :-)

on a more technical note — i’ve been reading more recently on zero knowledge proofs and zksnarks/zkstarks. thank you to ember for their casual talk + discussion on zkp’s (my notes here)!

hypnopompia -- published fiction story w/ kernel mag

hypnopompia

— a story about dreams, brainwaves, music, and relationships.

i recently published a speculative fiction story with kernel magazine! i wrote most of the story during a writing retreat with the reboot writer studio in asheville, north carolina.

many thanks to saffron huang, jamie wang, jasmine sun, andrew yoon, and many more in the community + friends for all the support and feedback. read it online here or buy the print mag here :-)

sleep, dreams, and brain waves

gave a casual talk on sleep, dreams, and brain waves to a qualia interest group through interact! some topics covered:

  • brain wave frequencies and brain states

    • delta, theta, alpha, smr, beta, gamma

  • sleep and dreams

    • sleep stages, rem sleep, sleep and eeg

    • liminal dream states: hypnagogia, hypnopompia, sleep paralysis

    • theories of dreaming

    • lucid dreaming

    • oneirogens, aka dream herbs

  • neurofeedback

    • history and applications

  • binaural beats (see the little sketch after this list)

    • hearing theory, frequency theory

    • brain wave entrainment

    • solfeggio tones

  • visual reconstruction of dreams
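
on the binaural beats bullet above: the effect comes from playing slightly different frequencies into each ear, and the perceived beat shows up at the difference frequency. a minimal numpy sketch (frequencies are arbitrary illustration values targeting a ~6 hz theta-range beat):

```python
import numpy as np

sr = 44100  # sample rate
t = np.linspace(0, 5, 5 * sr, endpoint=False)  # 5 seconds of audio

left = np.sin(2 * np.pi * 200 * t)   # 200 hz tone in the left ear
right = np.sin(2 * np.pi * 206 * t)  # 206 hz tone in the right ear

# played over headphones, the brain perceives a ~6 hz "beat" (206 - 200),
# which is the basis of the entrainment claims
stereo = np.stack([left, right], axis=1)  # shape: (samples, 2), one channel per ear
```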

here are the slides:

i’ve been on a real neuroscience + neurofeedback + brain waves kick this year and have been really interested in getting into dream research! i read dream researcher allan hobson’s dream life: an experimental memoir and am currently reading jennifer dumpert’s liminal dreaming: exploring consciousness at the edges of sleep. i also wrote a speculative fiction short story earlier this year exploring dream therapy, neurofeedback, binaural beats, and music, to be published later this summer :-) excited to share it once it’s out!

i’ve been particularly interested in:

  • neurofeedback + eeg + music

  • therapeutic applications of lucid dreaming

  • dream generation and reconstruction

  • autoencoders for brain states and dreams

if any of these topics interest you, please get in touch! i would love to collaborate or chat more about any of these explorations :-)

Localhost Talk: creative applications of deep learning, aka, neural networks for fun and not profit :-)

Earlier this week I gave a talk at Localhost, the Recurse Center’s public-facing technical speaker series. Slides embedded below. Here’s also a link to the talk slides if you want to see my notes included.

The talk covers some of the creative deep learning projects I’ve worked on while at RC:

Overall I received a lot of enthusiastic positive feedback and felt pretty good about how it went! I do feel somewhat proud of all of the fun projects I was able to explore while at RC, and it feels nice to be able to share that with others.

Implementing char-RNN from Scratch in PyTorch, and Generating Fake Book Titles

This week, I implemented a character-level recurrent neural network (or char-rnn for short) in PyTorch, and used it to generate fake book titles. The code, training data, and pre-trained models can be found on my GitHub repo.

 
Heart in the Dark
Me the Bean
Be the Life
Yours
 

Model Overview

 
Diagram of the char-rnn network architecture. Source.

The char-rnn language model is a recurrent neural network that makes predictions on the character level. In contrast, many language models operate on the word level.

Making character-level predictions can be a bit more chaotic, but might be better for making up fake words (e.g. Harry Potter spells, band names, fake slang, fake cities, fantasy terms, etc.). Word-level language models might have an advantage for generating longer pieces of text, like summaries or fiction, as they don’t need to figure out how to spell, in a sense.

There do exist character-word hybrid approaches. For example, the GPT-2 model uses byte pair encoding, an approach that interpolates between the word-level for common sequences and the character-level for rare sequences.
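
To make that interpolation concrete, here’s a toy sketch of the core byte pair encoding step: repeatedly merge the most frequent adjacent pair of symbols, so common sequences collapse into single tokens while rare ones stay split into characters. (This illustrates the general idea only, not GPT-2’s actual implementation, which operates on bytes with a learned merge table.)

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent symbol pairs and return the most common one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0]

def merge(tokens, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged, i = [], 0
    while i < len(tokens):
        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("the cat and the hat")
for _ in range(5):  # a few merge rounds
    tokens = merge(tokens, most_frequent_pair(tokens))
print(tokens)  # frequent sequences like "the" merge early; rare characters stay single
```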

This particular char-rnn implementation is set up to handle multiple categories of text. In this use case, it is able to make predictions for different book genres, e.g. Romance, Fantasy, Young Adult, etc.
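
For a sense of the architecture, here’s a condensed sketch of what one category-conditioned char-rnn step can look like in PyTorch, in the spirit of the diagram above (layer names and sizes are illustrative, not necessarily identical to the code in my repo):

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    """One step: (genre one-hot, current char one-hot, hidden) -> (next-char scores, new hidden)."""

    def __init__(self, n_categories, n_chars, hidden_size=128):
        super().__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(n_categories + n_chars + hidden_size, hidden_size)
        self.i2o = nn.Linear(n_categories + n_chars + hidden_size, n_chars)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, category, char, hidden):
        combined = torch.cat([category, char, hidden], dim=1)
        hidden = torch.tanh(self.i2h(combined))    # carry state to the next character
        output = self.softmax(self.i2o(combined))  # distribution over the next character
        return output, hidden

    def init_hidden(self):
        return torch.zeros(1, self.hidden_size)
```

At sampling time, you feed in a genre vector plus a start-of-title character, then keep feeding each predicted character back in until an end-of-title token, which is roughly how the per-genre titles below come out.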

Training Data

The training data used for this model is a modified version of a Goodreads data scrape of 20K book titles. I transformed the CSV file into separate text files for the top 30 genres. The resulting split dataset can be found in my GitHub repo.
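
The split itself only takes a few lines of pandas, roughly like this (the filename and column names here are guesses at the scrape’s schema, not necessarily the real ones):

```python
import pandas as pd

df = pd.read_csv("goodreads_books.csv")  # hypothetical filename and columns
top_genres = df["genre"].value_counts().head(30).index  # keep the top 30 genres

for genre in top_genres:
    titles = df.loc[df["genre"] == genre, "title"]
    with open(f"data/{genre}.txt", "w") as f:  # assumes a data/ directory exists
        f.write("\n".join(titles))  # one title per line, one file per genre
```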

GPU training time with this model took about 20 minutes on an NVIDIA GeForce GTX 1080 Ti. Generating samples only takes a few seconds.

Results

The following results are a selected sampling of outputs. Note that I’m mainly including examples that consist of real words, with a few exceptions.

Romance

Heart in the Dark
Years of the Dark
You the Book
The Stove to the Story

Fantasy

Growing the Dark
Book of the Dark
Red Sande

Fiction

In the Bead Store
Jen the Bead
King the Bean

Historical

A to the Bean
Other and Story

Science Fiction

Darke Sers
Voringe
In the Beantire

Mystery

Bed Singe
Kiss of the Dark
Red Story

Classics

A Mander of the Suckers
Gorden the Story of Merica

Childrens

Dark Book of the Story of the Sures of the Surating
Late
Story of the Bean

Paranormal

A Store of the Store
Red Store
Stariss and Storiss
Wind Store

New Adult

Live Me Life
Growing Me
In the Bean
Me the Bean

Poetry

Yours
Me

Erotica

Volle the Story of Men
King of the Dark
Dork of the Dark
Work of the Dark
Bed Storys of the Dark
Your Mind

Biography

Be the Life
On Anger and Of Mand Anger

Comically, there are many book titles that revolve around beans, beads, stores, and darkness. While I did notice some subtle differences between genres, they don’t appear particularly drastic overall.

joke2punchline, punchline2joke: Using a Seq2Seq Neural Network to "Translate" Between Jokes and Punchlines

 
> what do you call an unpredictable chef ?
< ouch .
 

After implementing the seq2seq model, an encoder-decoder network with attention, I wanted to get it to translate between jokes and punchlines. The scripts, pre-trained models, and training data can be found on my GitHub repo.

Model Overview

The underlying model is a PyTorch implementation of the Sequence to Sequence (seq2seq) network, an encoder-decoder architecture with an attention mechanism. Seq2seq can translate an arbitrary text sequence into another arbitrary text sequence; a more conventionally useful application would be translating English to French or vice versa. For this project, I trained the seq2seq model on question-answer format jokes, so that it can output a punchline given a joke, or output a joke given a punchline.
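
As a rough picture of the moving parts, here’s a condensed sketch in the spirit of the PyTorch seq2seq tutorial (names and shapes are illustrative, not the exact code in my repo):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)

    def forward(self, token, hidden):
        # one source token in, one encoder output + updated state out
        embedded = self.embedding(token).view(1, 1, -1)
        return self.gru(embedded, hidden)

class AttnDecoder(nn.Module):
    def __init__(self, vocab_size, hidden_size, max_len):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.attn = nn.Linear(hidden_size * 2, max_len)  # scores over source positions
        self.attn_combine = nn.Linear(hidden_size * 2, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, token, hidden, encoder_outputs):
        # encoder_outputs: (max_len, hidden_size), one vector per source token
        embedded = self.embedding(token).view(1, 1, -1)
        weights = F.softmax(
            self.attn(torch.cat([embedded[0], hidden[0]], dim=1)), dim=1)
        context = torch.bmm(weights.unsqueeze(0), encoder_outputs.unsqueeze(0))
        output = self.attn_combine(
            torch.cat([embedded[0], context[0]], dim=1)).unsqueeze(0)
        output, hidden = self.gru(F.relu(output), hidden)
        return F.log_softmax(self.out(output[0]), dim=1), hidden
```

The two directions, joke2punchline and punchline2joke, can then be trained as separate instances of the same architecture with the source and target sides swapped.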

Results

Overall, the results were somewhat nonsensical, as one might expect. These results are curated by me based on whether or not they made me, at minimum, smile. Yes, I do laugh at my own jokes.

For the following examples, > represents the text input, < represents the model output. I’ve selected examples where the joke or punchline is not directly from the training set, i.e. excluding any output that is simply being regurgitated from the original dataset.

Joke2Punchline

For the following examples, the first line is a fake joke I wrote up using words within the model’s joke vocabulary and fed into the model (>), and the second line is the punchline outputted by the model (<).

 
> what do you call an unpredictable chef ?
< ouch .

> what do you call a pile of pillowcases ?
< screw music

> why was the sun hospitalized ?
< because he was sitting on me .

> what do you call an unhappy star ?
< stay here !

> what do you call an unhappy star ?
< days numbered !

> what is a strawberry s favorite month ?
< a cat !

> who s there ?
< in the dictionary .

> what is red and bad for your teeth ?
< a a gummy bear

> what treasure can you find on a thursday ?
< the lettuce !

> when is a sun a moon ?
< a barber driver

> how many bananas can the moon eat ?
< fish and the train .

> what do you call an upside down egg ?
< the dough

> why was the sun unhappy ?
< because he wanted to tearable time paste !

> what did the skeleton say when they died the wrong year ?
< it march

> how many snails does it take to get to the moon ?
< to the hot hot shakespeare !

> why was the moon crying ?
< because he was on the deck !

> where do sheep go to school ?
< they take the mile bison of course !

> how many emotions does the sun have ?
< he got cents
 

Punchline2Joke

For the following examples, I fed the model fake punchlines, written using words within the model’s punchline vocabulary, and the model outputted a joke that would result in the input punchline. The first line is the fake punchline I fed into the model (>), and the second line is the joke outputted by the model (<).

 
> two parents
< what has four wheels and flies over the world ?

> watermelon concentrate
< when do you stop at green and go at the orange juice factory ?

> cool space
< what do you call an alligator in a vest with a scoop of ice cream ?

> meteor milk
< what do you call a cow that is crossing ?

> one two three four
< what did the buffalo say to the bartender ?

> jalapeno ketchup
< what do you call a boy with no socks on ?

> ice cream salad !
< what did the fish say to the younger chimney ?

> the impossible !
< what did the worker say when he swam into the wall ?

> both !
< what do you call a ghosts mom and dad ?

> pasta party
< what do you call the sound a dog makes ?

> salad party
< what did the buffalo say to the patella ?

> dreams party
< what do you call the sound with a fever ?

> a thesaurus and a dictionary
< what kind of shorts do all spies wear ?

 

Considerations

Training Data

To train the model, I needed a dataset of clean jokes in question-answer text format.

While I did find a dataset of question-answer format jokes, the jokes were scraped from Reddit’s r/jokes subreddit. Going through the file, I did not like most of the jokes at all; many were highly problematic (often racist, sexist, queerphobic, etc.), and I would rather compile my own than feed bad data into my model.

One option would be to filter this dataset using a set of “bad” keywords, but trying to filter a heavily biased dataset was less appealing to me than creating a new set entirely. An alternative could be to write a scraper for r/cleanjokes, filtering in only question-answer format jokes, but I didn’t want to invest too much time and energy in this toy project, and I personally am not a fan of using Reddit for training data in general.
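
For what it’s worth, the question-answer format check itself would be the easy part; here’s a hypothetical sketch, assuming one joke per line with the question and punchline tab-separated (the content filtering is the part I wanted to avoid automating):

```python
QUESTION_STARTERS = ("what", "why", "how", "when", "where", "who")

def is_question_answer(question: str, punchline: str) -> bool:
    """Keep only jokes that read as a question with a separate punchline."""
    q = question.strip().lower()
    return q.startswith(QUESTION_STARTERS) and q.endswith("?") and bool(punchline.strip())

with open("jokes.tsv") as f:  # hypothetical file: question <tab> punchline
    pairs = [line.rstrip("\n").split("\t") for line in f if "\t" in line]

qa_jokes = [(q, a) for q, a in pairs if is_question_answer(q, a)]
```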

I ended up compiling my own small dataset of clean jokes in the question-answer format, consisting of a little over 500 jokes total. A major trade-off was that the model’s vocabulary is relatively limited, but I enjoyed the jokes much more and felt much better about the data I was feeding into the model.

Teacher Forcing

For the joke2punchline and punchline2joke models, the teacher forcing ratio was set to 0.5. I’d be curious to adjust this parameter and see the results. I would expect a lower ratio to result in more nonsensical output, whereas a higher ratio would likely result in more outputs that are directly from the training set.

I think an ideal setup would be to lower the teacher forcing ratio in addition to having a much larger training set.
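
For context, teacher forcing is a coin flip during training: with probability equal to the ratio, the decoder is fed the ground-truth token at each step instead of its own previous prediction. A rough sketch of how the ratio plugs into the decoding loop (variable names illustrative, matching the decoder sketch above):

```python
import random

teacher_forcing_ratio = 0.5  # the value used for these models

def decode_sequence_loss(decoder, decoder_input, hidden, encoder_outputs,
                         target_tensor, criterion):
    """Accumulate loss over one target sequence, sometimes feeding ground truth back in."""
    # target_tensor: (seq_len, 1) of token indices
    loss = 0.0
    use_teacher_forcing = random.random() < teacher_forcing_ratio
    for t in range(target_tensor.size(0)):
        output, hidden = decoder(decoder_input, hidden, encoder_outputs)
        loss += criterion(output, target_tensor[t])
        if use_teacher_forcing:
            decoder_input = target_tensor[t]               # feed the ground-truth token
        else:
            decoder_input = output.argmax(dim=1).detach()  # feed the model's own guess
    return loss
```

With the ratio at 0.5, half of the training sequences see the model’s own (often wrong) predictions, which is what lets the outputs wander; pushing it toward 1.0 keeps the decoder glued to the training targets, hence more regurgitation.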

Possible Extensions

I do think it would be fun to generate jokes and punchlines using an RNN or LSTM before feeding them into these models, such that there is less human intervention (i.e. writing fake jokes/punchlines manually).

I also think the model would be way more fun to play with if I could train it with a much larger dataset, i.e. 10K+ jokes.