Week 2: Extrusions

Our assignment for Week 2 was to create a new extrusion in Fusion 360 by finding a dimensioned drawing of an extrusion, creating a sketch of the cross-section, and extruding the sketch to create a 3D model.

Dimensioned Drawing

This was the final dimensioned drawing I chose to work with in Fusion 360: a 40x20 aluminum extrusion.

My Extrusion

It helped that this 40x20 extrusion drawing was symmetrical and that most of its dimensions were even numbers, and it was also useful to have a sheet of paper and a pen on hand to make additional calculations and jot down numbers that were not marked on the drawing. I started my sketch in Fusion 360 with a combination of construction lines and rectangles on the left side before replicating it on the right side. This assignment was a great opportunity to become more familiar with the basic sketch tools, while realizing there is still more I want to learn about how best to approach a drawing and make my sketches more precise.

The sketch in progress.

The completed sketch ready for extrusion.

The final 3D model of the extrusion.

Introduction to Computational Media: Week 10

This past week in ICM, we focused on exploring sound in p5. Our assignment was to create a sketch that analyzes sound (live or recorded audio) and translates it into a visual representation. I wanted to experiment with visualizing the sound of popping popcorn. For this initial version of the sketch, I used an image of a single piece of popcorn that changed in size according to the amplitude of the audio. I would like to keep playing around with manipulating this image and generating more popcorn as the sound plays.
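
A pared-down sketch of the core amplitude-to-size mapping looks something like this (the asset filenames here are placeholders, and the size range is arbitrary):

let popSound, popImg, amplitude;

function preload() {
  popSound = loadSound('popcorn.mp3'); // placeholder filename
  popImg = loadImage('popcorn.png'); // placeholder filename
}

function setup() {
  createCanvas(400, 400);
  imageMode(CENTER);
  amplitude = new p5.Amplitude(); // p5.sound object that measures output volume
}

function mousePressed() {
  // browsers require a user gesture before audio can start
  if (!popSound.isPlaying()) {
    popSound.loop();
  }
}

function draw() {
  background(255);
  let level = amplitude.getLevel(); // roughly 0.0 (silence) to 1.0 (loud)
  let size = map(level, 0, 1, 50, width); // louder pops, bigger popcorn
  image(popImg, width / 2, height / 2, size, size);
}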

An animated sample of the popcorn sound sketch.

Video and Sound: Final Project

Once A Little Mermaid

For our final project in Video and Sound, Jenny Wang and I collaborated on creating a reinterpretation of The Little Mermaid in the form of a social media video campaign.

Process

We expanded on the previous storyboard that Jenny sketched out with more details about our version of The Little Mermaid and the character of Ariel. We decided that Ariel lived in New York City and had her own social media account, and she would tell the story of her former mermaid life and why she traded her voice to be human through an introductory video post.

To develop the video, we came up with five parts we wanted the narrative to include: 1) introduction, 2) Ariel's past mermaid life, 3) what changed and what she found, 4) her reasons for leaving home and becoming a human, and 5) her message and mission. From there, we wrote a voiceover narration and identified what video/audio assets we needed for each section.

With Adobe Premiere Pro as our editing software, we used a combination of footage that Jenny sourced along with video I filmed with my iPhone in Long Island City. We also both watched the Netflix documentary Chasing Coral, which became a point of inspiration for this project. The Ocean Agency, whose work documenting coral bleaching was featured in the film, maintains a coral reef image bank with video footage of healthy and bleached coral reefs that we were able to use. I initially recorded the narration just to get a sense of timing for the video, but we ended up using it as the voiceover for the final version.

Social Media Campaign

To give Ariel a voice, we chose Instagram as her platform and created an account with the handle @once.a.little.mermaid. The graphic for her profile photo was inspired by one of the poster illustrations for Disney’s animated version of The Little Mermaid. True to Ariel’s character as an ocean activist, the accounts she follows and engages with are primarily organizations related to ocean and marine life conservation.

A screenshot of the profile page for our character, Ariel Tritonsdatter, on Instagram as @once.a.little.mermaid.

Instagram Posts

The initial draft of our introductory video was just under two minutes in length, but after reviewing it and receiving feedback in class, we shortened a few pauses and split the video into two parts for posting. (We also had to consider Instagram's time limit for video posts; posting the full-length video to IGTV was another option, but we kept the short attention span of social media in mind.)

Introductory Video: Part I

Introductory Video: Part II

Additional Posts

To follow up on the call to action from the last video, we planned on having the subsequent posts raise awareness about specific issues of ocean pollution, such as the impact of ghost gear and plastic on marine life, and highlight actionable steps that humans can take in their everyday lives to help protect the ocean.

Final Result

Soon after making the Instagram account, we picked up some followers and likes on the posts, and we also received positive feedback on this concept and the reimagining of Ariel as a former mermaid on a mission to save the seas.

While Jenny and I had not initially planned on continuing the project beyond this class, there are still many more potential stories that could be told and content that could be produced and shared through this account. We both care about ocean conservation and environmental issues, and Jenny compiled excellent research as part of this project. Some ideas we had for future posts included creating short videos and covering issues such as coral bleaching from climate change, the impact of COVID-19 on the plastic pollution problem, and ocean acidification. Stay tuned!

Introduction to Computational Media: Week 6

Assignment

For week six, we were asked to build on the assignment from the previous week, such as adding additional shapes and controls, and to use arrays where appropriate.

Sketch

Building on the previous Create Your Own Wall Drawing sketch, I wanted to add an option for switching between a range of geometric shapes, again in reference to the original Wall Drawing 340. I created a fifth slider to choose a shape with one of three values: 1) circle, 2) rectangle, and 3) triangle. If I keep going with further controls, I would like to add either a slider or a mouse press to toggle between the primary colors (red, yellow, blue) as well.
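
The shape control itself comes down to reading the slider's value and branching on it. Here is a pared-down sketch of just that part (the other four sliders are omitted):

let shapeSlider;

function setup() {
  createCanvas(400, 400);
  rectMode(CENTER);
  shapeSlider = createSlider(1, 3, 1, 1); // values 1-3 in steps of 1
}

function draw() {
  background(220);
  let shape = shapeSlider.value();
  if (shape === 1) {
    circle(width / 2, height / 2, 100);
  } else if (shape === 2) {
    rect(width / 2, height / 2, 100, 100);
  } else {
    triangle(width / 2, height / 2 - 50,
             width / 2 - 50, height / 2 + 50,
             width / 2 + 50, height / 2 + 50);
  }
}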

View the sketch in the p5.js Editor.

Introduction to Computational Media: Week 5

Assignment

The assignment for week five was to draw a complex design that uses multiple shapes with different parameters: 1) have at least two shapes move independently on the screen, 2) use DOM UI elements, such as a slider, to control some of the shape parameters, and 3) reorganize "groups of variables" into objects.

Sketch: Create Your Own Wall Drawing

For this assignment, I continued to take inspiration from Sol LeWitt’s work, particularly Wall Drawing 340, which layers geometric shapes and lines drawn in primary-color crayon on walls painted the same red, yellow, and blue, and made a sketch that would allow you to create your own Wall Drawing.

Wall Drawing 340 by Sol LeWitt at the MASS MoCA.

I recreated one of the blue wall drawings, but with a circle shape, and added a few sliders that could adjust some parameters as desired: the fill color of the circle, the size of the circle, the number of lines, and the color of the lines. My outstanding challenge with this particular sketch is figuring out exactly how I can and should reorganize its variables into objects.
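
One possible direction for that reorganization (a sketch of the idea, not my actual code) would be to group the circle-related and line-related values into two objects and update them from the sliders each frame:

let circleParams = { fill: 0, size: 100 };
let lineParams = { count: 20, stroke: 0 };
let fillSlider, sizeSlider, countSlider, strokeSlider;

function setup() {
  createCanvas(400, 400);
  // slider ranges here are illustrative, not the original values:
  fillSlider = createSlider(0, 255, 0);
  sizeSlider = createSlider(10, 300, 100);
  countSlider = createSlider(1, 50, 20);
  strokeSlider = createSlider(0, 255, 0);
}

function draw() {
  background(0, 0, 200); // blue wall
  // read the sliders into the grouped objects:
  circleParams.fill = fillSlider.value();
  circleParams.size = sizeSlider.value();
  lineParams.count = countSlider.value();
  lineParams.stroke = strokeSlider.value();

  // evenly spaced vertical lines:
  stroke(lineParams.stroke);
  for (let i = 0; i < lineParams.count; i++) {
    let x = map(i, 0, lineParams.count, 0, width);
    line(x, 0, x, height);
  }

  // the circle on top:
  noStroke();
  fill(circleParams.fill);
  circle(width / 2, height / 2, circleParams.size);
}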

You can try the sliders in the embedded sketch or view the sketch in the p5.js Editor.

Introduction to Computational Media: Week 4

Assignment

This week’s assignment: make something with a lot of repetition.

Sketch

We were provided with a few sources of inspiration for creating a sketch with repetition, including the large-scale wall drawings by Sol LeWitt. I had seen some of LeWitt’s drawings at Dia:Beacon a few years ago, and I remember being amazed by the incredible amount of work that went into creating so many graphite lines. I wanted to try my hand at creating similar repetition with code. View the sketch in the p5.js Editor.
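
The heart of that kind of repetition is just a loop stepping across the canvas. A bare-bones version of the idea looks something like this (the spacing and gray value are arbitrary):

function setup() {
  createCanvas(600, 400);
  background(255);
  stroke(80); // graphite gray
  // repeat a horizontal line every few pixels down the canvas:
  for (let y = 0; y < height; y += 4) {
    line(0, y, width, y);
  }
}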

Introduction to Computational Media: Week 3

Assignment

Our week three assignment was to work with rule-based animation, motion, and interaction: 1) create or expand on a sketch with animated elements, 2) select one or more of the interface elements presented in class (such as rollovers, switches, sliders, or checkboxes), and 3) tie everything together and have the interface element(s) control the visual design or behavior of other elements in your animated sketch.

Sketch

For this assignment, I created a sketch with three sliders that control the visual design and animation of a rotating square on the canvas. The slider controls are as follows: 1) change the background color from black to white, 2) change the fill of the rotating object from black to white, and 3) change the speed of the rotating object from slower (starting at no movement) to faster. View the sketch in the p5.js Editor.
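
A condensed sketch of this setup (with illustrative slider ranges rather than my exact values) might look like this:

let bgSlider, fillSlider, speedSlider;
let angle = 0;

function setup() {
  createCanvas(400, 400);
  rectMode(CENTER);
  bgSlider = createSlider(0, 255, 0); // background: black to white
  fillSlider = createSlider(0, 255, 255); // square fill: black to white
  speedSlider = createSlider(0, 0.2, 0, 0.01); // rotation speed: none to faster
}

function draw() {
  background(bgSlider.value());
  fill(fillSlider.value());
  translate(width / 2, height / 2); // rotate around the canvas center
  rotate(angle);
  rect(0, 0, 150, 150);
  angle += speedSlider.value();
}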

A screenshot of the Week 3 sketch.

Video and Sound: Week 3

Reinterpreting The Little Mermaid

For the final video assignment, Jenny Wang and I will be working together to reinterpret a classic fairy or folk tale in the format of a “viral” video campaign, filmed on our mobile devices, then edited and posted on a social media platform. We decided to go with The Little Mermaid and wanted to change the narrative: instead of the main character dreaming of living in the human world to be with a prince, she trades in her life as a mermaid for a new one on land in pursuit of a greater mission of protecting the ocean.

Synopsis

In this reinterpretation, the Little Mermaid is on a campaign to save the ocean. In her home under the sea, she has been encountering signs of pollution and seeing plastic objects floating in the water and washing up ashore. She collects and follows these items to find out where they are coming from, and discovers the reality of the world above the water. In order to share her discoveries, she makes a deal with a sea witch: she trades in her voice to become human and leave her underwater home. Although now voiceless, she uses the visual imagery and sound of the videos shared on her social media account to show how underwater life has been changed by pollution and how her home has become unlivable over time. The Little Mermaid asks humans to help save her home, take action to protect the ocean, and be aware of their impact on the environment.

Storyboard

The storyboard for our reinterpretation of The Little Mermaid.

Moodboard

The moodboard for the project.

Introduction to Computational Media: Week 2

Assignment

The assignment following our second class was to create a sketch that includes these three elements: 1) one element controlled by the mouse, 2) one element that changes over time, independently of the mouse, and 3) one element that is different every time you run the sketch.

Sketch: Circles vs. Squares

I made a simple sketch that continuously draws randomly placed circles throughout the canvas, filled with a random color ranging from dark gray to black (the elements that are different every time you run the sketch and that change over time, independently of the mouse). When you click anywhere on the canvas, you draw a white square with the same width as the circle. View the sketch in the p5.js Editor.
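
A simplified version of the logic might look like this (the sizes and gray values here are approximations):

let size = 50; // width of the most recent circle, reused for the square

function setup() {
  createCanvas(400, 400);
  background(255);
  noStroke();
}

function draw() {
  // a new randomly placed, randomly dark-gray circle every frame:
  fill(random(0, 80));
  size = random(10, 60);
  circle(random(width), random(height), size);
}

function mousePressed() {
  // a white square with the same width as the circle:
  fill(255);
  square(mouseX - size / 2, mouseY - size / 2, size);
}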

Screenshot of Circles vs. Squares sketch in p5.js Editor.

Video and Sound: Week 2

For week two, we were tasked with collecting sounds and working with a partner to create a virtual sound walk in Unity. When my partner Nick Parisi and I first met to discuss our initial ideas and draft a storyboard, we discovered that we had both gone to Governors Island over the previous weekend, and we decided to build our arc around how the island provides a quick escape from the hustle and bustle of New York City, being only a short ferry ride away from Manhattan or Brooklyn.

The walk begins at the ferry landing, with the sounds of the boat engine, an announcement blaring from the intercom, and the waves hitting the shore. From there, you venture through the island on a footpath and encounter gulls squawking, bike bells ringing, children playing, and a skateboard passing by. After finally making it up to the top of a set of hills off the beaten path, you at last find a place of serenity and respite from the noise: a bench to sit, breathe deeply, and take in the present moment.

Nick impressively took on the challenges of the build in Unity, while I worked on collecting and editing the sounds for the virtual walk. We used some of the original recordings I had made for the sound collage assignment and supplemented them with sounds sourced from the Freesound database. We decided to have the sound of gulls and the water of New York Harbor remain consistent throughout the island, while audio elements like people in conversation or children on a playground would become louder as you approached. The walking surfaces (the main footpath versus grass) also triggered different footstep sounds.

The virtual sound walk can be experienced at i.simmer.io/@Sleepnir/governorsislandsoundwalk-final. To navigate, click into the window and use the arrow keys or the W, A, S, and D keys; to exit, press the ESC key.

Video and Sound: Week 1

The Danger of a Single Story

“When we reject the single story, when we realize that there is never a single story about any place, we regain a kind of paradise.” –Chimamanda Ngozi Adichie

There is power in the stories we hear and tell. Chimamanda Ngozi Adichie touches upon this idea in her TED Talk: “Stories have been used to dispossess and to malign, but stories can also be used to empower and to humanize.” I appreciated her message and this particular reminder that, as a storyteller, it is important to recognize that a story can become dangerously definitive, because there is never a single perspective or narrative about people and places. There may be voices and narratives that go unheard and untold, and we may be buying into the single stories we have consumed, carrying our own assumptions and biases. In my personal life and in my creative and professional work, my takeaway is that I hope to maintain this kind of consciousness about storytelling, especially in the stories I choose to tell.

Sound Collage

This past Sunday afternoon, I took my folding bike out for a ride from Brooklyn, onto a ferry, and around Governors Island, and recorded some sounds along the way. This is a sound collage of my afternoon adventure outside.

Final Project: Steampunk Coffee Machine

For our final project in Physical Computing, Erkin Salmorbekov, Sammy Sords, and I made the Steampunk Coffee Machine: a re-imagined coffeemaker that creates an interactive brewing experience by guiding its user in making their perfect cup.

Ideation

We began brainstorming our final project with two things in mind: 1) we had a shared interest in coffee, and 2) we liked the concept of having a user interact with some sort of steampunk-inspired lever mechanism. We also discussed how coffee made us think about ritual, warmth, individual preferences, different types of beans and flavor profiles, and coffee-making as a performance art. These thoughts converged into the idea of creating a device that could calculate the amount of coffee and water needed for a perfect brew based on a user's preferences (such as their desired coffee strength). We also wanted the machine to have some theatrical elements and a steampunk aesthetic, and incorporate brass/copper, Edison bulbs, pipes, knobs, valves, and perhaps even steam itself in the design.

This was a sketch from November 20, 2019 outlining the overall concept and possible features of the machine. We listed some questions we could ask the user and the steps they would take to complete the brewing process.

Concept

The interaction begins with the user selecting their answers to three questions about their current state and coffee preferences:

  1. How are you doing today? The response options are three emojis: great, meh, or tired. The user’s selection corresponds to the baseline amount of coffee needed; someone doing great will be given more grams of coffee than someone who is not.

  2. How strong would you like your coffee? Response options: subtle, balanced, or bold. Selecting “subtle” lowers the number of grams, whereas “bold” increases it.

  3. Which roast profile do you prefer? Response options: light, medium, or dark. The user’s preferred roast profile is matched to a specific bag of beans.

Based on the values of the user's selections, the machine would display which coffee beans to use and how many grams of coffee to measure for a perfect cup (in a 10 oz. mug), calculated according to coffee-to-water ratios ranging from 1:14 to 1:17. For the class presentation, the coffee bean options were three roasts from La Colombe: Mexico, a light roast; Haiti, a medium roast; and Monaco, a dark roast.
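
As a rough illustration of the math (not the machine's actual code, which lives in the repo linked below): a 10 oz. mug holds about 296 g of water, each combination of answers maps to a ratio, and the ratio determines the dose. The exact mapping here is a made-up example:

// hypothetical sketch of the dose calculation, not the machine's code
const WATER_GRAMS = 296; // 10 oz. of water is roughly 296 g

function coffeeGrams(mood, strength) {
  // mood: 0 = tired, 1 = meh, 2 = great
  // strength: 0 = subtle, 1 = balanced, 2 = bold
  // higher scores shift the ratio from 1:17 toward the stronger 1:14
  const ratios = [17, 16.25, 15.5, 14.75, 14];
  return Math.round(WATER_GRAMS / ratios[mood + strength]);
}

console.log(coffeeGrams(2, 2)); // great + bold   -> 21 g at 1:14
console.log(coffeeGrams(0, 0)); // tired + subtle -> 17 g at 1:17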

After the machine displays the user's coffee type and number of grams, they scoop out the grounds until the weight on the scale matches the recommended amount. From there, they pour water into the bucket, place their coffee grounds into a paper filter sitting inside the brass filter basket, pull the lever down – and voilà! The steampunk coffee machine begins to heat the water and drip it right above the filter, preparing a cup of coffee for the user's enjoyment.

System Diagram

A diagram of the various components of the machine and how they were all connected together.

Code

The code for the scale and the potentiometers can be viewed at github.com/afaelnar/steampunk_coffee_machine.

Photos

The Steampunk Coffee Machine.

Midterm Project: Song of Swords

Concept

For the midterm project, the goal was to create a simple interactive system with physical controls. I partnered with Sammy Sords, who also works as a stage combatant and teaches stage fighting with broadswords. We pursued the idea of pairing a broadsword with the Arduino and a speaker to teach sword fight parry positions and pacing. The tone would change with the orientation of the sword or upon a clash, and altogether, the movements would generate a song.

Process

We made use of the Arduino Nano 33 IoT’s built-in LSM6DS3 inertial motion unit (IMU), which is a 3-axis accelerometer and 3-axis gyroscope. We installed the Arduino LSM6DS3 library to read the values for acceleration and rotation and the Madgwick library to further determine the heading, pitch, and roll for the parry positions.

The code was based on the example provided for Determining Orientation. Sammy mounted the Arduino onto the sword and defined the orientation for four different parry positions (1st, 2nd, 3rd, and 4th) based on the serial monitor readings, and then we assigned a tone frequency to each position.

While we planned to also generate sound when the sword clashed with another sword, we weren’t able to resolve the issue of the gyroscope bouncing on impact, and so it was not part of the final result.

#include <Arduino_LSM6DS3.h>
#include <MadgwickAHRS.h>

// initialize a Madgwick filter:
Madgwick filter;
// sensor's sample rate is fixed at 104 Hz:
const float sensorRate = 104.00;

void setup() {
  Serial.begin(9600);
  // attempt to start the IMU:
  if (!IMU.begin()) {
    Serial.println("Failed to initialize IMU");
    // stop here if you can't access the IMU:
    while (true);
  }
  // start the filter to run at the sample rate:
  filter.begin(sensorRate);
}

void loop() {
  // values for acceleration & rotation:
  float xAcc, yAcc, zAcc;
  float xGyro, yGyro, zGyro;

  // values for orientation:
  float roll, pitch, heading;
  // check if the IMU is ready to read:
  if (IMU.accelerationAvailable() &&
      IMU.gyroscopeAvailable()) {
    // read accelerometer & gyroscope:
    IMU.readAcceleration(xAcc, yAcc, zAcc);
    IMU.readGyroscope(xGyro, yGyro, zGyro);

    // update the filter, which computes orientation:
    filter.updateIMU(xGyro, yGyro, zGyro, xAcc, yAcc, zAcc);

    // print the heading, pitch, and roll:
    roll = filter.getRoll();
    pitch = filter.getPitch();
    heading = filter.getYaw();
    Serial.print("Orientation: ");
    Serial.print(heading);
    Serial.print(" ");
    Serial.print(pitch);
    Serial.print(" ");
    Serial.println(roll);

    // play tone for parry position -- 1st
    if (xGyro > 80 && pitch <= 2 && roll >= 5) {
      tone(8, 523.25, 1000);
      delay(100);
    }
    // play tone for parry position -- 2nd
    else if (xGyro > 80 && pitch <= 2 && roll < 5) {
      tone(8, 587.33, 1000);
      delay(100);
    }
    // play tone for parry position -- 3rd
    else if (xGyro > 80 && pitch > 2 && roll >= -5) {
      tone(8, 659.25, 1000);
      delay(100);
    }
    // play tone for parry position -- 4th
    else if (xGyro > 80 && pitch > 2 && roll < -5) {
      tone(8, 698.46, 1000);
      delay(100);
    }
  }
}

Result

Sammy demonstrating the parry positions and the change in tone output with each position.

For the demonstration, Sammy attached the speaker to the glove of the hand bearing the sword and powered the Arduino with a portable USB charger. The volume of the speaker ended up being too quiet during the in-class demo (which could be addressed in the future), but the tone output can be heard in the video above and fulfills our goal of providing sound feedback to the user.

Labs: Servo Motor Control and Tone Output

Following up on our week four class, I revisited the tone output lab to practice using Arduino functions with the speaker. I wasn’t able to get my speaker to function last week (I'm not sure if it was my circuit or my code), and I am still trying to get a grasp of the programming and how tones are generated.

The first step for me was soldering wires onto the speaker to make it easier to connect to the breadboard. The soldering tutorial in class was helpful, and I felt much more confident using the iron this time around (I certainly noticed an improvement in technique since my first attempts to solder wires a couple of weeks ago for the switch lab).

The solder station set-up: a soldering iron, a fume extractor, brass wool for cleaning the solder tip, and helping hands holding up the speaker.

After I got my speaker up and running, I wanted to more specifically test out tone melody output. I discovered that programming is currently my weakness.

I had come across code for generating the notes of some recognizable songs, so I first attempted setting up three pushbuttons to each play a different song snippet. While each pushbutton did play a different melody as assigned, I had a couple of issues. First, the song would be cut short (only several notes would play), and second, I would have to re-upload the program each time I wanted a button to play.

In my second attempt, I had two pushbuttons set up to play two Super Mario Bros. theme songs. This time, the pushbuttons played the correct melodies, the songs played in full, and the pushbuttons worked continuously. I wanted to be able to switch between the two melodies depending on the button, but I couldn’t make it work.

In both cases, I’m wondering what I’m fundamentally missing with the code.

Two pushbuttons intended to play two different Super Mario Bros. theme songs with the Arduino: 1) the original melody, and 2) the underworld melody. The buttons play the assigned melody when initially pressed, but they don’t actually function to switch between the songs. Holding either of the pushbuttons down lets the current song finish before going into the other song.

Observation and Labs: Arduino Digital and Analog

Labs

This past week, we focused on learning how to create circuits and write code for digital input/output and analog input with a microcontroller.

Lab two reviewed adding digital input (a pushbutton), adding digital outputs (LEDs), setting up the Arduino IDE, and programming the Arduino to set the inputs and outputs. Lab three demonstrated connecting analog input (a force-sensing resistor and a potentiometer), and utilizing the serial monitor to detect the range and state of the sensor (such as the level of force applied to the sensor).

While the labs enabled me to get a basic understanding of the digital input/output and analog input pins on the Arduino, I found the programming component more challenging, especially writing the code from scratch. I experimented with making minor tweaks to the code of the programs in the labs, such as making the LEDs blink with the push button, or changing the blinking speed of the LED.

I didn't feel confident enough yet to write my own code for the creative exercises, but one thing I wanted to learn was how to make one single pushbutton control two LEDs. I encountered a thread on the Arduino forum that discussed how one might be able to do this and tried out the code myself with the same circuit created in lab two.

The first press of the push button turned the red LED on, the second press turned the red LED off and the blue LED on, the third press turned both LEDs on, and the fourth press turned both LEDs off.

One push button programmed to control two LEDs.

Observation

In addition to completing the labs, we were asked to observe a piece of interactive technology in public that is used by multiple people. I observed the self-checkout machines for making purchases at a CVS pharmacy store.

My assumption about their use is that the machines provide shoppers with an expedited process: they give the shopper the ability to avoid the (sometimes long) line for the cashier and to pay for and bag their purchases themselves. The store can have multiple machines available for use and would only need someone to monitor them and troubleshoot any issues that arise with the machine or checkout process. By employing one cashier and one person to oversee the express checkout, the store also does not have to fully staff all of the registers.

When I entered the store, I counted four self-checkout machines, and two were in use. I noticed that other customers who came in at the same time quickly picked up a few items and then immediately went to the express checkout area. The machines have a clear purpose to customers: people knew what to do and were in and out of the store in only a couple of minutes.

One noticeable aspect of the self-checkout machines is the robotic voice that instructs the shopper through the process, which is audible even while in other parts of the store. As a person approaches, the machine senses their presence and says, "Welcome, please select your language. To start, simply begin scanning your items and follow the system prompts." There are two screens: 1) a large touchscreen that displays the scanned items and a selection of buttons for inputting a CVS ExtraCare Card number, choosing a form of payment, and scanning a coupon; 2) a credit card terminal that also provides a separate set of card-specific instructions for the transaction on a smaller screen.

I have personally used the express checkout enough times to get through the process without having to wait for the voice instructions, and I know that there is a necessary step of selecting the payment type on the main touchscreen to process a credit card inserted in the terminal. (Before I knew this, I would stand at the machine waiting, not understanding why my payment wasn't being processed.) However, with the multiple screens, buttons, and instructions, some customers are not so familiar with the system and take more time to listen for guidance.

I also observed that the machines do not always operate so perfectly that a shopper never needs to request assistance from an employee. At one point, the voice from a machine said, "You have activated our inventory control system. Please see a cashier to complete your transaction." If a coupon deposited into a slot does not register as received, or the weight of an item that was meant to be placed in a bag is not detected, the machine indicates there is an issue with the transaction.

Observing these express checkout devices is a reminder of the limitations of machines and the importance of thoughtful design in addressing the frustrations that arise from such flaws. In The Psychopathology of Everyday Things, Don Norman wrote, "Machines usually follow rather simple, rigid rules of behavior. If we get the rules wrong even slightly, the machine does what it is told, no matter how insensible and illogical." However, Norman also noted, "Designers need to focus their attention on the cases where things go wrong, not just on when things work as planned. Actually, this is where the most satisfaction can arise: when something goes wrong but the machine highlights the problems, then the person understands the issue, takes the proper actions, and the problem is solved. When this happens smoothly, the collaboration of person and device feels wonderful."

In the case of the CVS self-checkout machine, there is a collaboration between the device and the user. It is not a perfect express checkout system on its own, but the user has the ability to make corrections. During my observation, a machine experiencing an issue alerted the customer, "Please wait. Help is on the way." The employee standing nearby moved over, tapped a few buttons on the screen, and scanned a white plastic card in their hand. The issue was resolved, and the customer moved on with satisfaction.

Lab: Electronics and Switches

In week two, we reviewed the basics of electronic circuits and components, and for the first time at ITP, we were given a physical computing kit that included most of the parts needed to complete the class labs.

The various components of the physical computing kit we received in class in order to start prototyping our own electrical circuits. Some of the basic components used for this week’s lab included the solderless breadboard, LEDs, resistors, solid core hookup wires, and the Arduino Nano 33 IoT.

The electronics labs for the week had us practice using a multimeter, building circuits, adding switches, and soldering connections — and after running through these exercises, we were asked to get creative with making our own switch.

I wanted to experiment with making a puzzle switch, and ended up going with the concept of assembling a pizza to complete a circuit. The idea was to have the pepperoni toppings function as switches by placing the felt pieces (with conductive foil underneath) on top of the wires that were embedded in the felt layer of cheese.

However, after I added in all of the wires, I found out that just placing the five pepperoni switches on top of the pizza did not provide a stable connection for the LED to light up. Because the felt circles are so lightweight, I had to press down firmly on all five pieces in order for the circuit to work. (If I were to make adjustments, I would consider the possibilities of other conductive materials and wiring methods.) I ended up simplifying the design to a single switch, so that laying down a final loose piece of pepperoni closed the circuit instead.

The final touch: placing the last pepperoni lights up the LED.