My name is Kelvin Chow, a mechanical engineering graduate interested in pursuing disruptive healthcare applications. I completed a master's degree from the University of Toronto (2020), where I pursued research in the fields of organ-on-a-chip and bioprinting. I completed my bachelor's degree at the University of Waterloo in mechanical engineering (2017), focusing on mechanical design and mechatronics. Please feel free to browse my disparate collection of projects, all connected through the theme of questioning the standard.
The human body is dynamic, complex, and extremely challenging to study. Much of what we know about how the human body functions comes from studying biology at the cell level. Extrapolating from cell function to whole-body function is a huge jump, one that overlooks the importance of tissue- and organ-level functions. Studying cells in a petri dish is not an accurate model of the human body. Neither is animal testing, which translates poorly to human clinical relevance.
The purpose of organ-on-a-chip technology is to provide a better in-vitro model for studying cells by recapitulating relevant tissue- and organ-level functions. These custom-engineered models introduce physiologically relevant dynamics. For example, when the heart beats, cells experience shear stress from pulsatile blood flow and membrane strain from tissue contractions. By introducing these dynamics, the goal is to define a new standard for disease and drug-response modeling.
Collagen is the most abundant protein in our body, and many tissues are tubular in nature. In our lab, we had the unique ability to manufacture collagen tubes with physiologically-relevant properties. We believe these tubes are a good foundation to build around for organ models on a chip.
The low stiffness allows us to expand and collapse the membrane, similar to some human tubular tissues.
With a tubular structure, we can duplicate the uniform shear stresses from fluid flow in tubular tissues.
These tubes are challenging to physically manipulate (we use the analogy of handling wet toilet paper), so we needed a hosting device to perform cell culture experiments. We developed a device with 4 access ports to the luminal (interior) and abluminal (exterior) space of a 2 cm long collagen tube segment. The device was made from clear acrylic for imaging access, and all components were autoclavable.
With such a small internal volume, there was not enough cell culture media inside the tube to sustain long-term cell culture. To overcome this, we built a rocking platform that recirculates media from reservoirs attached to the ports of our device. To run multiple experiments at a time, we developed this seesaw to hold 36 chips inside an incubator, allowing for intermittent gravity-driven perfusion.
In human physiology, barrier tissues are permeable membranes that regulate the transport of gases, nutrients, drugs, and other solutes.
One method for studying tissue barriers is culturing cells on Transwell inserts: porous plastic inserts used to create two compartments inside a well plate. A common experiment to check barrier integrity is a trans-epithelial electrical resistance (TEER) measurement, in which a probe is placed on either side of the insert and the electrical resistance across the membrane is measured.
Commercial TEER equipment is designed for traditional well plates and could not be used with our custom tube hosting device. Thus, we needed to adapt the TEER principle and develop our own system to monitor barrier tissue function in our devices.
Shown on the right are the electrode probes we developed, which insert into the reservoirs of our device. We supply a voltage from a function generator and measure the voltage drop across the tube membrane to compute TEER across our tubular membrane.
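The arithmetic behind this measurement is simple voltage-divider analysis. As a minimal sketch (the series resistance, blank value, and membrane area below are hypothetical placeholders, not our actual circuit values):

```python
def membrane_resistance(v_source, v_membrane, r_series):
    """Resistance across the tube membrane from a voltage-divider measurement.

    v_source: amplitude supplied by the function generator (V)
    v_membrane: measured voltage drop across the membrane (V)
    r_series: known series resistance in the circuit (ohm)
    """
    current = (v_source - v_membrane) / r_series  # same current flows through both
    return v_membrane / current


def teer(resistance_ohm, blank_ohm, area_cm2):
    """Unit-area TEER (ohm*cm^2): subtract the cell-free blank, scale by area."""
    return (resistance_ohm - blank_ohm) * area_cm2


# Example: 1 V supplied, 0.5 V measured across the membrane, 1 kOhm in series
r = membrane_resistance(1.0, 0.5, 1000.0)   # -> 1000 ohm
```

TEER is conventionally reported per unit area (Ω·cm²) after subtracting a cell-free blank, which is what lets values be compared across devices of different geometries.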
Human bronchial epithelial cells were seeded and cultured inside the tube lumen, and TEER measurements were taken over 2 weeks. As the cells proliferated to form a complete barrier, we saw a gradual increase in membrane resistance. At day 11, the resistance began to plateau, indicating complete cell coverage inside the tube. We later demonstrated the utility of this TEER setup for studying barrier function by exposing our tubular tissue barriers to various drugs and stimulants.
Quake valves played an instrumental role in the growth of microfluidics research. Through the deflection of a soft, flexible silicone (PDMS) membrane, they allow the integration of miniature on-chip valves and pumps for precise fluid manipulation. We attempted to recreate the Quake valve with our collagen tubular structures, a natural substrate better suited to organ-on-a-chip modeling.
Two collagen tubes were hosted in an acrylic device in a cross configuration. Fluid flows through the red tube. Using a solenoid valve, we toggle the blue tube between a high and low transmural pressure. At low pressure, the blue tube is collapsed, allowing unrestricted flow in the red tube. At high pressure, the blue tube expands to block flow.
(a) Collagen valve junction viewed under a brightfield microscope. (b) Red tube perfused with fluorescent tracer solution; the imaging window (in the far-left render) is viewed under a fluorescence microscope and images are taken. (c) Sequential fluorescence images analyzed with particle imaging velocimetry (PIV) software to find the instantaneous flowrate in the channel.
A natural extension of the Quake valve is to place three in series, creating a peristaltic pump that recirculates fluid and generates a net flow.
A modified acrylic device was designed to host 3 control (blue) tubes that recirculate flow within the red tube. These tubes were actuated in a 6-phase pattern, 0.75 seconds apart.
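The actuation logic can be sketched in a few lines of Python. The 6-phase sequence below is the standard peristaltic pattern for three Quake-style valves; whether it matches our exact phase ordering is an assumption, and `set_valves` stands in for whatever solenoid driver interface is used:

```python
import itertools
import time

# Standard 6-phase peristaltic sequence for three valves
# (1 = pressurized/closed, 0 = vented/open). Exact ordering is illustrative.
PHASES = [
    (1, 0, 0),
    (1, 1, 0),
    (0, 1, 0),
    (0, 1, 1),
    (0, 0, 1),
    (1, 0, 1),
]


def run_pump(set_valves, n_cycles=1, dwell_s=0.75, sleep=time.sleep):
    """Step three solenoid valves through the peristaltic sequence.

    set_valves: callback that drives the three solenoids, e.g. GPIO writes.
    dwell_s: time between phase changes (0.75 s in our experiments).
    """
    for phase in itertools.islice(itertools.cycle(PHASES), 6 * n_cycles):
        set_valves(phase)
        sleep(dwell_s)
```

Each step changes exactly one valve, which is what produces the travelling occlusion (and hence net flow) along the channel.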
The collagen pump device viewed under brightfield microscope.
Using the same PIV technique from the previous section to measure instantaneous flow, we plot the pump profile here. The net recirculatory flow in this pump was about 10 µL/min.
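Because the instantaneous flow oscillates with the valve cycle, the net (recirculatory) flowrate is just the time-average of the PIV trace over whole cycles. A minimal sketch of that reduction:

```python
def net_flowrate(times_s, q_ul_min):
    """Cycle-averaged (net) flowrate from an instantaneous PIV flow trace.

    Trapezoidal time-average of q(t); a positive result indicates net
    recirculation in the pumping direction.
    """
    area = 0.0
    for (t0, q0), (t1, q1) in zip(zip(times_s, q_ul_min),
                                  zip(times_s[1:], q_ul_min[1:])):
        area += 0.5 * (q0 + q1) * (t1 - t0)  # trapezoid rule per segment
    return area / (times_s[-1] - times_s[0])
```

Averaging over an integer number of pump cycles matters: truncating mid-cycle biases the estimate toward whichever phase the trace ends on.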
This re-circulatory, pulsatile pumping strategy mimics the flow patterns found in human organ systems.
THE PROBLEM: For most people, interacting with digital devices is easy and second nature. Input devices such as a phone touchscreen, a computer mouse/keyboard, or a TV remote require little thought or concentration to use. However, these devices are not designed for everyone. People with impaired motor control or impaired vision have a very difficult time using these devices, which demand fine motor control. Commercial assistive technologies are often expensive, limited in options, and not generally designed with these users in mind. Everyone is unique in how they move, making it challenging to design a user-friendly, all-encompassing solution.
THE GOAL: To design low-cost, inclusive assistive technology for individuals with cerebral palsy to help them interact with digital devices easily.
THE DREAM: To close the digital accessibility gap between the general population and the cerebral palsy population. With this device, we want to give individuals with cerebral palsy (and other disabilities, too) greater individuality and be more capable of making positive contributions to society.
This project was a 2 month sprint in collaboration with Hackaday and the non-profit organization, United Cerebral Palsy Los Angeles. More details can be found on the Hackaday project page.
One thing we learned about the UCPLA community is the emphasis on visual arts. The UCPLA art studio allows users to paint with different body extremities (head, arm, leg, etc.) with the help of 3D-printed adaptive painting tools (see right images).
Inspired by watching UCPLA artists paint, my thought was: could we capture these motions and apply them to digital graphics software? Using 2 M5StickC microcontrollers as "floating joysticks", I replaced a computer mouse for drawing images in Microsoft Paint. More details can be found in this log.
If I can use this device, then it's kind of cool. If an individual with cerebral palsy can use this device, then it becomes useful. Step 1 was to find a simple application suitable for testing. For this, I chose a computer maze game requiring 4 directional inputs (keyboard arrow keys). This application should be familiar for users who operate motorized wheelchairs with the ubiquitous 4-direction joystick. The left video shows me playing this game with two extremities, demonstrating the robustness of the device. The right animation is from the first remote user testing session within a UCPLA residence, where our first user was able to successfully use this device, too. It was amazing to see how engaged the user was, and the session provided valuable lessons for future device iterations.
(a) A new 3D-printed enclosure was designed to house the (b) PCB from the first prototype. The unnecessary components (small battery, LED screen, etc.) were stripped from the original board and new electronics were added. This included a (c) vibrating motor for haptic feedback, (d) increased battery capacity for 9 hour runtime, and (e) a large, raised, CP-friendly button.
An objective of this project was to design a "universal" remote control, meaning with one input device, someone could control many electronic devices. For an initial proof-of-concept demo, this gesture control remote was used to emulate a Roku remote through a smart hub (Home Assistant installed on a Raspberry Pi). More details can be found in this log.
To wrap up this project, the idea was presented to the UCPLA panel, which can be seen in the video on the left. In addition, 3 prototypes were shipped to UCPLA for further user testing.
This project is still under development, with the focus on gathering more user testing data from other users with cerebral palsy. The software portion of this project was rushed and left much to be desired; hopefully a future iteration can improve on it.
Limitations of Initial Prototype: During the initial hardware development phase of the gesture control project, user testing sessions made it clear that our signal processing of the IMU data was not adaptive enough to meet the needs of each unique user in the cerebral palsy community. Seeing our device used by others showed us that we were still forcing gestures that felt unnatural. Our initial approach also could not scale the number of output gestures, which would have been limited to about 10.
New Approach: We suspected that a machine learning model would complement our application well. After skimming the TinyML textbook, we judged the integration feasible. TinyML could be the enabling technology for a high degree of user configuration, perfect for a community where users' abilities and needs are one-of-one.
Introducing flourish, a project aimed to democratize digital accessibility to help the cerebral palsy community gain individuality and the ability to flourish. This project was selected as a world finalist for Microsoft's Imagine Cup 2021.
This map shows the overview of our platform. Apologies for the oversaturation of Microsoft product ads, this was for a Microsoft competition.
To use our platform, a user wearing the gesture device would go to our gesture training web application where they would be asked to perform various gestures. The recorded motion data would then be sent to our training portal where we would train a convolutional neural network to identify these gestures.
We then output and deploy a fully trained model back to the gesture device through a firmware update.
This device would then be ready to be used offline as a digital control interface.
Once enough gesture training data was collected, we used it to train a supervised machine learning model, a convolutional neural network. The input dataset we ended up using was just the accelerometer (X, Y, and Z) time series data logged by the IMU on the device.
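The exact network architecture isn't detailed here, but the conv → pool → softmax pipeline typical of TinyML gesture classifiers can be sketched in plain NumPy. Window length, kernel size, and channel counts below are illustrative, not the deployed model's:

```python
import numpy as np


def conv1d(x, kernels, bias):
    """Valid-mode 1-D convolution over a (time, channels) accelerometer window.

    x: (T, C_in) window of X/Y/Z samples; kernels: (K, C_in, C_out); bias: (C_out,)
    """
    T, _ = x.shape
    K, _, C_out = kernels.shape
    out = np.empty((T - K + 1, C_out))
    for t in range(T - K + 1):
        out[t] = np.tensordot(x[t:t + K], kernels, axes=([0, 1], [0, 1])) + bias
    return np.maximum(out, 0.0)  # ReLU


def classify(x, kernels, bias, w_out, b_out):
    """Conv -> global average pool over time -> softmax over gesture classes."""
    h = conv1d(x, kernels, bias).mean(axis=0)
    logits = h @ w_out + b_out
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()
```

A model of this shape quantizes well to 8-bit weights, which is what makes offline, on-device inference feasible on a microcontroller.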
For the initial demo application, we trained the model on 31 different gestures (the 26 letters of the alphabet plus a few extras). On the right, you can see the validation accuracy for each individual letter. For transparency, we recognized early on that some letters were very similar (D&P, E&F, X&Y, etc.). Our workaround was to mix lowercase and uppercase letter gestures to improve model accuracy.
This trained model is uploaded back onto the gesture control device for real-world use.
After 2 weeks of data collection, we obtained an overall accuracy of 96% for 31 gestures, making it likely that the number of gestures could be increased further.
With a trained English alphabet on our wrist, the first thing any reasonable human being would do is spell out hello world. Not going to lie here, this worked much better than expected. Didn't even have to fake it.
Speech impairment is common for people with cerebral palsy, and one instance this platform can help is with a personalized dictionary of phrases communicated through gestures.
Microfluidics involves the manipulation of fluids in devices with microscale channel dimensions, enabling precise control at small volumes. This field has gained popularity due to its wide range of applications, particularly in biological research. Microfluidic devices are biocompatible, and their small volume requirements are advantageous since many biological samples are expensive and difficult to obtain in large quantities.
One common microfluidic device is a droplet generator. In its simplest form, two immiscible fluids, such as oil and water, are fed into a device and meet at a junction to create micro-droplets. The left image shows a schematic of this setup, while the right image provides a close-up of the junction where oil and water meet to form water droplets (video from Elveflow).
During my master's studies at the University of Toronto, I was fortunate to conduct research in the field of microfluidics. Additionally, this research extended to bioprinting, intertwining 3D printing and biology for promising applications in healthcare. This section focuses on the use of bioprinting and microfluidics, showcased through side projects in molecular gastronomy.
Molecular gastronomy is where food meets science to flip traditional culinary practices. With the same ingredients, the perception of taste can be altered through molecular gastronomy. Here, we focus on using microfluidics to generate micro-scale edibles. The preparation of 3 different dishes will be shown. (Please be prepared to exercise some level of imagination... Thanks.)
For each dish, a different microfluidic device is used. All 3 devices are flow-focusing devices that create alginate gels via the reaction between sodium alginate and calcium chloride (CaCl2). The two solutions undergo an ionic crosslinking reaction, producing an edible, tasteless calcium alginate gel. All devices contain a junction where an alginate-filled channel is sandwiched between two other channels. By controlling the flowrate ratios, the dimensions of the gels can be tuned.
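That tuning knob can be made concrete with an idealized plug-flow mass balance: the alginate core occupies a share of the channel cross-section equal to its share of the total flow. This is a sketch of the scaling only, not a calibrated model of our devices:

```python
import math


def fiber_diameter_um(channel_diameter_um, q_alginate, q_sheath):
    """Idealized fiber diameter from the flowrate ratio (plug-flow mass balance).

    The alginate core's cross-sectional area scales with its flow fraction,
    so d_fiber = D_channel * sqrt(Qa / (Qa + Qs)). Flowrates in any
    consistent units (e.g. uL/min).
    """
    frac = q_alginate / (q_alginate + q_sheath)
    return channel_diameter_um * math.sqrt(frac)
```

For example, running the sheath flow at 3x the alginate flow in a 200 µm channel would give a fiber of roughly half the channel diameter; in practice, viscosity contrast and gel shrinkage shift this.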
The first step is design. Here, a 3-layer microfluidic device is shown. The top and bottom layers are hot embossed, while the middle layer contains laser-cut through holes to connect channels between layers. The black holes are through holes, the green patterns are alginate channels, and the light blue patterns are oil channels. Fluid enters the device at the centre and exits at the 16 outlets located at the device perimeter.
Two moulds are required to create the top and bottom layers. (a) A PMMA master is micro-milled. (b) An epoxy resin is poured onto the master. (c) After setting, the stiff elastomeric epoxy is peeled from the master to create the mould.
The hot embossing process is shown in the schematic, where a mould is pressed against a PMMA slab. The right image shows the resulting embossed PMMA layer, a replica of the PMMA master.
After two PMMA layers are embossed and the middle PMMA layer is laser-cut, the three layers are bonded together via solvent bonding. Pipette tips are epoxied around the two inlet holes to connect to external tubing.
Alginate and oil are loaded into 2 syringes mounted on a syringe pump. Tubing is connected to the device, which is placed in a CaCl2 bath for crosslinking alginate.
Fibers are pulled out of the bath for examination. Fibers of different diameters can be extruded depending on the inlet flowrates.
Falooda is a sugary vermicelli-like fiber. Here are our fibers on some ice cream with an attempt to make an Indian dessert, kulfi falooda. For sanitary reasons, the fibers were extruded differently for consumption.
Bucatini is a tubular pasta made from an extrusion process. With a PDMS microfluidic device, we can produce candied micro-bucatini (alginate-based tubes with sub-millimeter dimensions). Refer here for details on the microfluidic device. Apart from shrinking food to alter food texture, a dish can be elevated through presentation. Through the controlled deposition of our bucatini extrusion, we can entice with presentation, psychologically enhancing the perception of taste.
Our bucatini maker consists of three syringe pumps (1) that are connected to a microfluidic device (2). The bucatini extrudes from a glass tube (3) and onto a plate fixed to a moving XY stage (4). All components are controlled through the custom GUI (5).
Here, we show the ability to deposit our candied bucatini in an alluring spiral pattern. You will have to trust me that these are indeed tubes.
Spherification is a technique used in molecular gastronomy to create spherical gels. Using a flow-focusing microfluidic device, alginate-in-oil emulsions can be formed to make alginate droplets and later crosslinked to form gels.
This 3-layer laser-cut PMMA device has 5 junctions. The flow-focusing junction is a standard design, but it is hidden here: the microfluidic channels are shaped like petals. The flower is a stylized Bauhinia blakeana, featured on the flag and emblem of Hong Kong.
On the left, one of the junctions is shown making monodisperse 200 micron diameter alginate droplets. The other phase is oil, flowing 20 times faster than the alginate phase to form droplets. The alginate droplets exit the device and enter a CaCl2 bath, where the alginate immediately gels to form "caviar".
Moore's law drives technological advancement toward smaller electronics. However, the cellphone, the most prevalent electronic device, defies this trend. Why? Once touchscreens became practical, a trade-off emerged between convenience and utility: it is more convenient to carry a smaller phone, but larger displays offer more utility. Arguably, the main difference between a cellphone, a tablet, and a laptop is the size of the electronic display. Electronics with reconfigurable displays would maximize both the convenience and the utility of these devices.
The common approach in flexible display research is to use semiconductor fabrication, with the key difference of swapping the silicon substrate for flexible, stretchable substrates (see figure below). The key issue with this approach is that once the substrate is stretched, the pixel density decreases, reducing image quality. This approach also depends on the substrate's mechanical properties, which degrade with stretching. This project proposes a different approach to the problem.
The first design proposed a solution where 16 individual square "islands", each representing an LED, would move uniformly. Each island was constrained to move linearly on a track and connected to other islands via guide arms. By moving one island, all islands would move simultaneously.
Two major problems of this design forced a redesign. First, this design required extra space to function, defeating the purpose of having a compact design. Second, this design shared the same flaw as the flexible substrate approach, which did not conserve pixel density when stretched.
From left to right, the figure shows the transition from collapsed to expanded state. Depending on the direction of the applied force, the prototype can expand to different aspect ratios. This figure shows expansion in two directions for uniform scaling.
(a) After finishing the design, all the components were 3D-printed. (b) After assembly, the prototype was built, and demonstrated the ability to expand and collapse. (c) LEDs were wired so each level of LEDs could be independently controlled. (d) One LED was placed on every island.
(a) A 2x2 version of the prototype could be expanded and collapsed to either a 2x3 or a 3x3 state. (b) With the current manufacturing method of 3D printing, the prototype could be scaled down to under a centimetre in height. The device height to width ratio remained the same with scaling. (c) Pixel density was conserved in different device states.
This design could shrink further with MEMS fabrication. However, the limitation of this design was the large height-to-width ratio. A future iteration of this design should focus on flattening the cube-shaped prototype. Nevertheless, this design showed a different, counterintuitive approach to flexible displays.
*Figures from Qaiser et al., Adv. Mater. Technol. (2018).
Individuals suffering from spinal cord injuries (SCI) often have balance instability caused by reduced neural control of lower-limb muscles. Rehabilitation tools have been shown to be effective in repairing the disconnect between brain and body. Two rehabilitation tools of interest are visual feedback training (VFT) and functional electrical stimulation (FES) therapy.
Wii Fit is a commercial example of visual feedback training. Training games can be used to strengthen muscles and improve balance.
MyndMove is a commercial example of functional electrical stimulation. Surface electrodes are placed on the body to activate targeted muscle groups to generate functional movements.
In this project, we propose a balance rehabilitation tool that combines visual feedback training with functional electrical stimulation therapy. By combining an engaging training game with stimulation therapy, we hypothesize that our balance training tool will amplify and accelerate rehabilitation compared to single-modal approaches.
Just as the Wii Fit can serve as a personal trainer, a fully developed version of our proposed system can have the same at-home functionality for individuals requiring attention for balance rehabilitation. We can use this system as a platform for SCI patients to rehab daily in the comfort of their home while giving therapists access to training data for monitoring.
One training session included 4 games played 3 times each. The games were designed to emulate a collection of muscle-strengthening exercises for balance rehabilitation. Each game displayed a transient coloured dot on the screen, showing the individual's centre of position on the force plate. Game performance was measured by assigning game scores.
TARGET: This game trained an individual to stand as still as possible in the optimal balance position. The objective was to keep the red dot as close to the centre as possible.
HUNTING: A blue target spawned at a random location, and the goal was to move the red dot inside the target and hold the position for 3 seconds. Once successful, a new target would appear in a different location. This game trained muscle reflexes by inducing constant shifting of body weight.
ELLIPSE: A target followed an elliptical orbit at the edge of an individual's range of motion. Once in the target, the target would increment along its path. This forced the participant to keep their centre of balance away from neutral position for the whole game.
COLOUR MATCHING: In this game, the dot changed colour. The individual needed to move the dot to its matching coloured target. Once successful, the dot changed to a different colour. This game required mental processing and drastic shifts in body weight.
In addition to the games, the training session included two balance assessments to track progress of balance improvement over multiple training sessions. These assessments were standard tests, giving performance indicators that show strong correlations to an individual's balance ability.
An individual was asked to stand on the force plate with their eyes closed, as still as possible, for 100 seconds. During this time, a centre of position (COP) time series recorded the involuntary sway of the individual. Three balance indicators were taken from these recordings: root mean square (RMS) sway in the (1) anterior-posterior (front to back) and (2) medial-lateral (side to side) directions, and (3) the 95% confidence ellipse of the data.
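These three indicators reduce to a few lines of arithmetic on the COP trace. A minimal sketch (the 95% ellipse uses the common chi-square approximation for bivariate data; the constants and names here are illustrative, not from the study software):

```python
import math


def balance_indicators(cop_ap, cop_ml):
    """Standing-balance indicators from a centre-of-position time series.

    cop_ap, cop_ml: anterior-posterior and medial-lateral COP samples.
    Returns (RMS_AP, RMS_ML, 95% confidence ellipse area).
    """
    n = len(cop_ap)
    mean_ap = sum(cop_ap) / n
    mean_ml = sum(cop_ml) / n
    ap = [x - mean_ap for x in cop_ap]
    ml = [x - mean_ml for x in cop_ml]
    rms_ap = math.sqrt(sum(x * x for x in ap) / n)
    rms_ml = math.sqrt(sum(x * x for x in ml) / n)
    # 95% confidence ellipse area from the sample covariance matrix:
    # pi * chi2_{0.95, 2dof} * sqrt(det(cov)), chi2 ~= 5.991
    var_ap = sum(x * x for x in ap) / (n - 1)
    var_ml = sum(x * x for x in ml) / (n - 1)
    cov = sum(a * b for a, b in zip(ap, ml)) / (n - 1)
    det = var_ap * var_ml - cov * cov
    area95 = math.pi * 5.991 * math.sqrt(max(det, 0.0))
    return rms_ap, rms_ml, area95
```

Larger RMS and ellipse values indicate more postural sway, i.e. poorer static balance.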
An individual was prompted by our GUI to perform the "star reaching task", leaning as far as possible without losing balance in the prompted (green) direction. After reaching in 8 directions, 8 data points were gathered, plotted, and joined to form an octagon. The area of the octagon gave the dynamic balance indicator, called the dynamic balance area (or base of support).
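The octagon's area follows from the shoelace formula applied to the 8 maximal-reach points. A minimal sketch:

```python
def dynamic_balance_area(points):
    """Shoelace area of the polygon formed by the maximal-reach COP points.

    points: reach positions in order around the octagon, as (x, y) pairs.
    """
    area2 = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        area2 += x0 * y1 - x1 * y0
    return abs(area2) / 2.0
```

The points must be ordered around the perimeter (as the 8-direction prompt naturally produces); a shuffled order would give a self-intersecting polygon and a meaningless area.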
Seven able-bodied individuals ran through the training session to test the effectiveness of the system. Three of the four games showed a correlation between game score (y-axis) and balance ability (x-axis), indicating the games were appropriately designed. For more details on this project, refer here.
One problem: everyone scored high on two of the games, ellipse and colour matching. This clustering made it difficult to distinguish an individual's balance ability from game scores. Immediately after the trial run, this was addressed by changing the scoring system to favour individuals who could keep their balance position closer to the centre of the game targets.
The next step would be to conduct another trial run on able-bodied participants, this time incorporating functional electrical stimulation. Once completed and further system optimizations are performed, the system would be ready for use in clinical trials on individuals with incomplete spinal cord injuries to determine the effectiveness of this rehabilitation tool.
Vehicles come in all sorts of shapes, sizes, and utilities - from sports cars to common sedans, hybrids to vintage cars, or trucks to Smart cars. While the rest of the vehicle has morphed to fit modern times, the wing mirror design has stagnated and looks comparable between most vehicles (see vehicles below).
The wing mirror has been a design flaw for over a century and requires a drastic change. First, the wing mirror is a safety hazard: it creates blindspot zones, making it standard procedure for drivers to keep their head on a swivel for blindspot checks before changing lanes. At highway speeds, fractions of a second can be the difference between life and death, and the safest option is for drivers to always keep their eyes on the road in front of them. Second, the wing mirror is the only part of the vehicle that unexpectedly protrudes from the body of the car. Although relatively small, this protrusion can account for up to 7% of total vehicle drag during driving.
Automakers are pushing to eliminate the law requiring vehicles to have anachronistic wing mirrors. We (hopefully) preview the future of wing mirror-less vehicles.
Due to the scope of the project, the system was designed around an off-the-shelf Raspberry Pi camera board, which served as the geometric constraint for our module. To avoid further increasing the module's footprint, a custom PCB was fabricated and placed over the camera board. The PCB includes high-power infrared (IR) LEDs for nighttime viewing, heater connections for de-icing, a temperature sensor, and an ambient light sensor.
The outer housing was designed to minimize drag created by this system. This camera module would be placed on the outside of the vehicle near the wing mirror. The housing prototype is 3D printed and provides a water-tight seal from the outdoor environment. A dual bandpass filter is placed in front of the camera lens to transmit visible light plus a narrow band centred on the wavelength of our IR LEDs.
Here is a size comparison between a typical vehicular wing mirror and our camera system. After running a simulation for a standard highway driving condition, our camera system was 99.8% more efficient than the wing mirror.
On the left, the camera (large frame) and side mirror (small frame) views show our car being passed by another car. A schematic is shown on the top-right. At first, the car is in both the camera and side mirror view. When the car leaves the side mirror view, it enters the normal blindspot of our vehicle. Yet, in our camera view, the passing car is still in frame and only leaves the frame when it has fully passed our vehicle. Thus, our camera eliminates the blindspot for cars in the adjacent lane.
Here is a comparison of our camera operating at night through the same highway section. The left is our camera with the infrared LED turned on, and the right is the camera with the light off. The right view is similar to the view in the side mirror. It is clear from these videos that the IR LED improves visibility at night. When the car enters the section without streetlights (or any other external light), the IR LED allows the camera to pick up the lane markings on the road. In comparison, the side mirror would have shown a pitch-black view, demonstrating a driver's current dependency on external light. This camera module also lets the driver see vehicles sneaking up with their headlights off (or moose, bears, zebras, kangaroos, or camels).
This was an undergraduate capstone project to demonstrate to the University of Waterloo community the potential advantages of a camera-based rear view system. For demo day, we showcased an interactive experience of our system on our supervisor's Porsche, demonstrating the safety benefits of our system. With the monitor placed in front of the driver, we demonstrated a driving experience without having to tilt your head to check blindspots.
Due to the limitations of time and money, the project was narrowed down to a manageable scope, demonstrating a few key concepts. Ideally, we would have put an emphasis on seamlessly integrating the monitors into the vehicle's dashboard to improve driving experience. With more time, another point of emphasis would have been coupling the camera system with haptic feedback in the steering wheel so drivers can feel their surroundings. There are numerous other ways in which technology can be used to improve our driving experience, so why not use it?
Have you ever slept through your alarm and missed an important meeting or exam? Are you the type of person that hits the snooze button 25 times every morning? Do you typically lie in bed in a depressing state of Weltschmerz, requiring all the will and strength in your mind and body to drag yourself out of bed? We've all been there. Mornings are the worst, and traditional alarm clocks just don't cut it.
Introducing the solution to your morning problems: a bed that literally kicks you out of bed. The tilting alarm bed functions as a normal bed until the alarm is triggered. Once activated, a latch mechanism retracts, causing one side of the bed to tilt over along with you. The bed will not lock back into the horizontal position for 30 minutes after activation, forcing you not only to wake up, but to stay out of bed. Stay woke.
Once designed, an initial prototype was built out of Popsicle sticks. This prototype let us visualize the concept and helped elucidate the direction for future design iterations. For instance, pinch points became more obvious, and the observed drop height and drop speed helped us understand the mechanics of rolling out of bed. Shown is lil' Matt Damon rolling out of bed in style.
After the design was finalized, a trip to Home Depot was planned. After weeks of arduous labour, the bed was built. Below are a few components that make up the bed, specifically the mechanisms involved with triggering the alarm.
Here is the bed in action. Instead of the traditional approach of waking up through sound, this bed uses the sense of (an excessively aggressive) touch. This alarm clock comes with a 100% guarantee of waking someone up. Although promising in its effectiveness, any further attempt to bring this product to market would be met with too many lawsuits. BTW, if this gif had sound, you would have heard a gunshot. RIP Blue Clock (2015-2015).
Engineering and arts are contrasting disciplines. Engineering is objective and logical, whereas the arts are subjective and creative. Lately, I've taken an interest in woodworking and feel like it's a good medium for projects that fuse art with engineering. Through these woodworking projects, my aim is to apply what I've learned in engineering to storytelling and personal expression. To begin, here are some traditional woodworking projects that I started with to learn the fundamentals of woodworking.
I wanted to work on projects that combine my engineering experiences with my new hobby of woodworking. This project was inspired by the Hacoa Kiboard. I wanted to figure out how to replicate it, with the addition of hiding it in a typical-looking woodworking project box. It's hard to see, but the keycaps are made from a single plank of wood, giving it the subtle detail of a continuous-grain keycap set. Documentation found here.
My first foray with wood was a bird carving course just before the pandemic. The bird on the left was the outcome of that class, and I vividly remember the instructor looking at it and going, "Oh, that looks...unique?". Can't say I disagree. While stuck at home during the pandemic, I tried a second bird. This was an attempt at a three-legged crow (yatagarasu), which was probably just as "unique". I kept at it, and after a few blisters carving a chess set, I felt like I was past the "unique" phase.
I'm not great at enunciating my words. If I say "I am panicking", it sounds like "I am panda king". So here's a panda king. Or a panicking. I don't know anymore. Whatever it is, I was quite pleased with it and retired my wood carving career on a high and enrolled in a real woodworking course.
This is going to require some mental gymnastics to comprehend, so please feel free to skip. My intention here was to create a super bowl for a friend's birthday gift. One might ask, what makes this glass vase a super bowl? I was trying to find a glass bowl and couldn't, and found the next closest thing, which was this glass vase. Now, assuming you can convince yourself that a vase is a bowl, what makes this bowl super? If you go to the subreddit called r/superbowl, you will not find any super bowls nor football-related content, but instead, superb owls. This happens to be my friend's favourite animal, and there are a few owls etched onto the vase. Hence, superbowl. It's the thought that counts.
This next vase is a bit more straightforward. I have a friend whose favourite chocolate is Quality Street, which comes in a very distinct purple octagonal box. I tried to recreate this box using purpleheart wood and added some details with the help of a laser cutter and epoxy.
Photo frames are so square. The emphasis of a photo frame is the photo it holds, and not the actual photo frame itself. I wanted to enshrine the memory of the Toronto Raptors 2019 Championship in something a little less square.
An icosahedron is made of 20 equilateral triangular faces. Starting with a blank triangular template in Photoshop, images were modified to prepare for laser etching. One-by-one, the panels were engraved and cut with a laser cutter. Once done, the panels were glued up to form an icosahedron.
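The post doesn't go into how the panel edges were prepared for glue-up, but as a hedged aside: joining 20 flat triangles into a closed icosahedron generally means beveling each edge at half the solid's dihedral angle, which is straightforward to compute.

```python
import math

# Dihedral angle of a regular icosahedron: arccos(-sqrt(5)/3)
dihedral = math.degrees(math.acos(-math.sqrt(5) / 3))  # ~138.19 degrees

# Each of the two mating panels contributes half of the remaining angle,
# so the edge bevel (measured from the panel face's normal cut) is:
bevel = (180.0 - dihedral) / 2  # ~20.9 degrees per edge

print(f"dihedral: {dihedral:.2f} deg, edge bevel: {bevel:.2f} deg")
```

In practice, a bevel around 21 degrees on every edge lets the triangles close up cleanly without gaps at the joints.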