FOCUS is an exploration of, and meditation on, the act of directed attention, the fixture of concentration, the state permitting clarity, and the mathematical point of convergence. This algorithm can produce images that are elegantly minimal, as well as images featuring complex patterns that explore visual illusions and artificial space. The limited palette consists of black, white, and five colors; this both constrains possible outcomes and allows the creation of unique perceptual interactions. Inspired by the Zen art tradition of doing more with less, FOCUS does not utilize any pseudo-random number generation, and each output can be recreated by hand using simple math and estimation. Except in extremely rare scenarios, each token can provide a wide variety of visual experiences through its interactive features: use up and down arrows to adjust the pattern recursion factor, left and right arrows to adjust the midpoint count, ‘h’ to hide or show the central shape, and ‘s’ to save. FOCUS is best viewed live and full screen.
These tokens are interactive, and as a fun little bonus, I deployed a variation of the FOCUS script that automatically animates the image according to the token’s deterministic hash. Some tokens may have patterns that repeat after a short period, while others may appear never to repeat. Also, some tokens may have patterns that grow too complicated for current browsers, so a safety mechanism is in place!
Photosensitive seizure warning: these animations contain flashing colors and moving patterns.
Following are the first 18 mints from the Ropsten testnet run (in random-ish order). Click to enlarge.
FOCUS is a 1000-image long meditation, and it is the second release in my Zenerative (Zen-inspired generative art) series. It’s meant to be played with through its interactivity, and I hope it challenges attentive viewers through its various portrayals of space and its creation of perceptual phenomena. Additionally, it can be a meditation in itself for anyone who has the focus and attempts to recreate an output manually. It relies on small sets of consistent shapes, colors, and ratios, helping the outputs to remain cohesive. However, with many millions of major trait combinations, the outputs are also varied and surprising.
Perception and Illusion
It’s important to understand that our perception of the world doesn’t perfectly reflect the world—and I’m not saying this with abstract Buddhist meaning; this is science. It takes time for sense signals to reach our brains, and it takes more time for our mind to comprehend the signals. Because of this delay, our consciousness uses visual sense information to predict the world, rather than merely portray it accurately at the time the light signal reached our eyes.
Our visual acuity isn’t constant either; when our eyes focus on a point, a much more sensitive area of the retina is trained on that point (it’s called the fovea). Everything in the periphery is less acute, and in those areas, phenomena can develop as our minds make judgements about what’s being sensed.
Perhaps the most subjective element in all of art is color. Everyone experiences color in slightly different ways. Roughly 4% of the population has some type of color blindness, and theoretically, some people with tetrachromacy actually see more colors than people with ‘normal’ vision. But what is most significant is that even in the general population with ‘normal’ vision, color perception is individually subjective.
Consider “The Dress.” Viewers might make a judgement on the color of the dress based on their assumption of the lighting in the photograph. Some researchers now suggest that people can make such an assumption based on the kind of lighting they’re most used to seeing: daylight for people who are awake most during the day and are exposed to a lot of sunlight, tungsten light for those people who tend to stay up later at night and experience less sunlight.
So everyone might see colors a little bit differently, but what about a single person’s perception of color? Shouldn’t that be consistent? It isn’t, either. Our acuity isn’t perfect, and thin strips of color against dark or light can take on different hues. Also, highly contrasting colors beside one another can create perceptual phenomena based on how our consciousness ‘reads’ the whole of the image. Even something as simple as swapping two colors between foreground and background can cause our minds to interpret the colors differently.
Illusions of receding depth can be created in a flat image through vanishing point perspective, while shapes or patterns ‘drawn over’ others can create the illusion of raised height. Size diminishing or remaining constant also plays with our mind’s perception of space.
Through the use of various techniques that may be represented in the outputs, FOCUS aims to create and allow the exploration of many of these phenomena. And finally, to incorporate abstract Buddhist meaning, the exploration this project undertakes parallels our ability/inability to perceive the true nature of reality, which is both literally and figuratively not what we see.
A quick study in generative art will reveal a few common threads to the genre: randomness, repetition, and recursion. In FOCUS, all these things are utilized, but with the randomness of the hash overruling the others. Although there are many features that work together to create the final image, I’ve allowed for the chance that blank features will present themselves, thereby increasing that output’s minimalism.
Two features in particular can contribute the most to the complexity and minimalism of an output: Pattern and Pattern Border. Each of these features has the possibility of ‘None.’ Also, the midpoints and pattern recursion factor can each be ‘1.’ That would result in a more minimal image as well, but with the interactivity of the project, these traits are easily changed.
If the deterministic hash sets a token’s Pattern or Pattern Border trait to ‘None,’ however, interactivity will have a less substantial effect. If both are ‘None,’ then the resulting image will not react to increasing or decreasing either the midpoint count or the pattern recursion factor.
Nonetheless, some very minimal images will be minted from this project–possibly the most minimal that the Art Blocks platform has yet seen. I love seeing outputs that are only a few concentric shapes, and I hope that viewers will appreciate them as well. As a benefit, the majority of minimal images will have the most dramatic response to user interactivity. Going from a pattern recursion factor of 1 to 4 produces a far more significant change than going from 31 to 34. Similarly, changes in midpoint count are much more dramatic at low numbers.
Minimalist art focuses on the essence of the subject and reduces extraneous details. I believe that the essence of FOCUS is the central shape, which will always be present in one form or another unless hidden by user interaction. It follows then that I believe the most minimal images from this series will be the most pure in execution. I can only hope that those most minimal results end in the hands of those who appreciate this randomly emergent minimalism.
The name of the project and its concept is ‘FOCUS’. One particular meaning of focus is the practice of maintaining a singular effort, where other things may pop up and distract but ultimately are set aside to continue pursuing the object of focus. This can be very similar to meditation, zazen in particular, where an unruly mind may generate thoughts and the practitioner must set them aside and refocus their attention. I believe that the project in itself is quite Zen for that reason, and in addition, I’ve identified a few aspects that follow common Zen art aesthetics and Zen philosophy.
The outputs are entirely and directly determined by the token’s hash value, and no computer-generated pseudo-random numbers are utilized. There’s a simplicity in that, and a truth. Outputs may be incredibly intricate or extremely minimal. The code only alters minor details in scenarios where a potentially concerning symbol may appear. Other than that, the hash creates the image in a straightforward way. By avoiding the use of additional randomness, the images can be accurately recreated manually. Even the most complex outputs can be reproduced by hand with a print-out of the algorithm, rulers, a calculator, and art supplies. The color names of the palette are meant to reflect this notion; they’re all real things, like pens and markers. I feel that undertaking a recreation would be a wonderful Zen experience (once you get past the challenge of reading code); one step after another, breath after breath. Like a spiritual practice, building as you go, little by little.
It has the Zen aesthetic of striving to create more with less. I wrote a short essay about this (and also on the next topic in this list), so I won’t go into discussion about the aesthetic here. I will highlight one example though: the project has a very limited, specifically chosen color palette. The use of the colors is sometimes sparse – many tokens will end up being only black and white; or black, white, and gold. Even on those tokens, and certainly in many others, interesting perceptual phenomena are created through color and shade interaction. Some phenomena can be appreciated across tokens as well: the green in one may look different than the green in another—even though it’s the exact same shade of green in all tokens that have green. This restriction of color thereby opens new possibilities of personal experience with the project.
The outputs explore artificial space. Again, not to go into too much detail that I cover elsewhere, the images that this algorithm creates include their own unique world—one both separate and different from the one we experience with our senses. Foreground and background, complexity and minimalism, transparency and solidity, and implied depth are expressed in the outputs through various trait combinations. Through interactivity, this artificial space can be explored within almost all tokens by changing the pattern recursion factor or midpoint count. The ‘default’ image of a token is not the true nature of the token; it’s only the surface of its identity.
Finally, FOCUS outputs are cohesive as a group, even though that group can conservatively have many millions of unique major trait combinations. To me, that dichotomy of cohesiveness and variety is very Zen. It’s like people—billions of separate individuals but cohesive in so many ways. Outputs can certainly appear very different at first glance, with different aspect ratios, central shapes, and main color features. But there’s more that holds these together than the millions of trait combinations that separate them. For example, the treatment of elements is consistent across all tokens: the ratios of line thickness, the handling of vertices, the perceptual color ranges experienced, and the overall structure.
Technical Info. & Examples
Assigning Main Features
Each unique hash looks something like this: 0x45f551cec331bbeab4fb641525e992cf5ba01be54d7d4e92ff8cc12893100318 (this is Ropsten token 0’s hash). It’s a hexadecimal number, and FOCUS pairs up digits and stores them for use. The hash is also broken into small pieces that are in turn converted into binary digits.
The hex pairs determine the project’s main features, while the binary digits are used for binary choices that some of the features have to make, ex: on/off or left/right. Each hex pair is only used once, but the binary digits may be used many times, creating a minimal pattern to the trained eye.
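To make the pairing concrete, here is a minimal sketch of one way to split a token hash into hex pairs and binary digits, as described above. This is my own illustration, not the actual FOCUS source; the variable names are hypothetical.

```javascript
// Illustrative sketch (not the FOCUS source): split a token hash into
// hex pairs for main features and binary digits for binary choices.
const hash = "0x45f551cec331bbeab4fb641525e992cf5ba01be54d7d4e92ff8cc12893100318";

// Drop the "0x" prefix, then group the 64 hex digits into 32 pairs.
const digits = hash.slice(2);
const hexPairs = digits.match(/.{2}/g); // ["45", "f5", "51", ...]

// Convert each hex digit into 4 binary digits, usable for on/off
// or left/right decisions.
const bits = digits
  .split("")
  .map((d) => parseInt(d, 16).toString(2).padStart(4, "0"))
  .join(""); // 256 binary digits in total
```

Because each hex pair is consumed once while the bits are reused, the same short binary sequence can echo through several features of a single token.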
The main features determined by the hash* follow. (Note, examples are covered in more detail later.)
- Shape. Refers to the central ‘focus’ of the image and dictates the aspect ratio.
- Lines to Recurrence
- Shape Recursion Factor (1, 2, 3, 5, 9, or 27)
- Circles at vertices
- Squares at vertices
- Perpendicular from Border
- Lines to Recurrence
- Staggered Lines to Recurrence
- Binary Override Lines
- Binary Override Fills
- Pattern Recursion Factor (1, 2, 4, 8, 16, or 32)
- Midpoints (1, 3, 7, 15, 31, or 63)
Additionally, if the stars align to create a token with specific features from the hash, a pair of new traits could be created. These each have far less than a 1% chance of occurrence, so they may not occur at all. More information will be released after the project is fully minted.
*There are a few feature overrides that are performed. For example, if the shape’s fill color is the same as the linework color, then any linework inside the shape would be hidden, so the algorithm adjusts the features that were dictated by the hash so that lines aren’t drawn unnecessarily. This also means that the features displayed with the token will be more representative of the final image than what the hash originally called for.
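The fill-versus-linework override described above could be sketched like this. The function and property names are hypothetical, invented for illustration; only the rule itself (matching fill and line colors suppress interior linework) comes from the text.

```javascript
// Hypothetical sketch of a feature override (names are illustrative,
// not from the FOCUS source). If the shape's fill color matches the
// linework color, interior lines would be invisible anyway, so the
// style dictated by the hash is adjusted before drawing.
function applyOverrides(features) {
  const adjusted = { ...features };
  if (adjusted.shapeFill === adjusted.lineColor) {
    // Don't draw linework that the fill would hide.
    adjusted.shapeStyle = "Solid Fill";
  }
  return adjusted;
}
```

Displaying the adjusted features, rather than the raw hash-derived ones, keeps the listed traits faithful to what the viewer actually sees.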
The first section of the hash is used to determine the central focus or shape, and this determines the aspect ratio of the resulting image. These are all examples from Ropsten testnet.
Before anything is drawn to the canvas, some parameters are set. One that’s particularly important is the line thickness (‘stroke’ in digital speak). Visible in almost every output, the stroke is definitely the most common element in FOCUS, and it needed to have just the right look.
tl;dr: Stroke thickness is set to the largest of the final image’s dimensions (width or height) divided by 375. Feel free to jump to the next section, unless you’d like to understand more about my approach to programming FOCUS.
Stroke thickness and why I didn't refactor a bit of code:
I do most of my planning on paper, and I use graph paper notebooks that I buy at the store for $0.25 each. In them, the largest square I can draw with the graph lines is 7.5 inches. As an old-school photographer, my brain works in 300 pixels/inch, so that square from my notebook translated digitally would be 2250 pixels square.
To make the algorithm resolution independent, I set a ‘unit’ (u) based off the largest dimension of the canvas. “Divide by 300,” says my brain, and so u in FOCUS is set to the largest dimension divided by 300. In my planning notebook, 7.5 inches is 2250 pixels, and then dividing that by 300 is of course 7.5–but now it’s 7.5 pixels, not inches.
FOCUS is meant to be a bridge between digital and analog, between screen and reality. With this in mind, I wanted the stroke to be the same thickness as a bold drawing pen…thicker than my usual note-taking pen (Le Pen at 0.3mm). My favorite bold pen is the Sakura Pigma Micron 08, which has a line thickness of a half millimeter. 0.5mm is a few ten-thousandths short of 0.02 inches, so rounding up, the target for the linework on my paper-turned digital canvas is 0.02 inch-thick lines. To turn that into pixels based on 300 pixels per inch, multiply 0.02 inches by 300 pixels/inch, and you get 6 pixels.
So, in this example, u is 7.5 pixels, but the target is 6 pixels. Simple enough, the algorithm sets the stroke for the linework to be equal to u x 0.8. Done…but later, I considered refactoring the code to just set u to 6. Instead of dividing by 300, the program would just divide by 375, and then stroke could simply be set to u. Very little else would have to change, and it might shave a couple bytes off of the filesize (which is a big consideration because uploading files to the Ethereum blockchain is very expensive).
Ultimately, I decided against refactoring this particular element because I like showing the “artist’s hand” in my work. In this case, it’s the “artist’s mind,” and because this project is not only digital but also generative, I think the artist’s humanity is even more important to include. Also, as a bridge between analog and digital, it just makes sense to me to work with a u that’s calculated off of 300 pixels/inch. And so, it remains, and stroke is set to u x 0.8.
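The whole calculation above fits in a couple of lines. This is a sketch in my own words, not the FOCUS source; the function name is invented, but the math (largest dimension divided by 300, stroke set to u × 0.8) is exactly as described.

```javascript
// Sketch of the unit and stroke calculation described above.
// Mathematically equivalent to dividing the largest dimension by 375.
function strokeWeightFor(width, height) {
  const u = Math.max(width, height) / 300; // 300 px/inch planning unit
  return u * 0.8; // target: a 0.02" (6 px at 300 ppi) bold-pen line
}
```

For the 2250-pixel planning square, `strokeWeightFor(2250, 2250)` lands on the 6-pixel target.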
The stroke can be seen as the white linework in the image above. While the center of the image appears to recede, line thickness remains constant throughout, creating a slight cognitive dissonance effect (if it were a real scene, the lines would grow thinner as they moved toward the center).
It’s hard to call one element of all the tokens the ‘background,’ because depending on the image created, it could refer to different things. Instead, I call the first thing drawn the ‘pattern,’ and I refer to the object it’s drawn on as ‘paper.’
The algorithm next calculates most (or all) of the points required to draw the entire image. This not only includes the points on the perimeter of the pattern and shape, but also each and every midpoint. In certain scenarios, other points are calculated later, but the bulk of the calculations happen before any image is set to screen.
The pattern is a rectangle, and three of the shape options are also rectangles that are parallel with the pattern edge, so they’re all treated similarly. The diamond shapes are treated slightly differently, but much of the same code can be repurposed for them. The circle and eyes however need specific code to calculate their points.
Also, the pattern and shape can each have recursion factors that require additional instances of all their points to be determined. The code processes all of these before moving on.
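Since the midpoint counts are always 1, 3, 7, 15, 31, or 63 (each one less than a power of two), one plausible way to place them is even subdivision of an edge. This is my own reconstruction for illustration, not the actual FOCUS point-calculation code.

```javascript
// Plausible sketch (my reconstruction, not the FOCUS source): a midpoint
// count of 2^k - 1 corresponds to bisecting an edge k times, i.e. placing
// the points at even intervals between the two endpoints.
function midpointsOnEdge(a, b, count) {
  // a and b are [x, y] endpoints; count is assumed to be 2^k - 1.
  const points = [];
  for (let i = 1; i <= count; i++) {
    const t = i / (count + 1);
    points.push([a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t]);
  }
  return points;
}
```

Precomputing these points for every recursion level up front matches the description above: the bulk of the geometry exists before anything reaches the screen.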
The pattern has two main visual stylizations: the border of each ‘recursion’, and an interior style that’s created between recursions. There are five choices for the border:
- Solid Line
- Dashed Line
Note that for these examples, I only selected tall tokens from Ropsten testnet mints.
These are created at all midpoints and corner points of the pattern (with the exception of the vanishing points when recursion factor is 1). In the case of Circles and Squares, the size of these shapes diminishes with recursion, but the thickness of their lines does not. This means that larger shapes may show negative space within them, while deeper recurrence shapes will trend toward appearing completely filled.
The pattern style is also created at this time. This feature has many options, and these options each have a high variance of results because they interact with both the pattern recursion factor and the midpoints value. Here are some examples that occurred in final testing.
- Perpendicular from Border
- Lines to Recurrence
- Staggered Lines to Recurrence
- Binary Override Lines
- Filled Quads
- Checkerboard Fills
- Binary Override Fills
- Triangle Fills
Note that for these examples, I only selected square tokens from Ropsten testnet mints.
Drawing the Shape
The outline of the shape is always drawn first, and unless the shape style specifies that there should not be a fill, a fill is added. All styles besides ‘Solid Fill’ require additional code.
The possible styles:
- No Fill
- Solid Fill
- Alternating Fills
- Lines to Recurrence
It’s important to note that not all of these styles are consistent between shapes. For example, the ‘eyes’ have styles that are handled very differently from all other shapes: Lines to Recurrence, Alternating Fills, and Lens all differ. Diamond shapes also behave differently in those same styles, and additionally, a Wide Diamond with one of those styles is treated differently than a Tall Diamond would be. The visual results for each are not simply rotations of one another; they’re completely different!
Note that for these examples, I only selected wide tokens from Ropsten testnet mints.
*Shapes with the ‘No Fill’ feature can vary significantly in appearance based on the shape’s recursion factor and the way in which that shape’s recurrence points are calculated. The following image is No Fill with a high recursion factor, and due to the pattern’s recurrence behind the wide diamond, a curious image is created.
It was very important to me to create an algorithm that produces static images, but because the technology offers so much more, I wanted to provide that as well. The interactive features include:
Scale responsive. Live view of these tokens will automatically update when the constraining window is resized. That may not seem like interactivity in the typical sense, and you’d be right. But once you find a token with tight lines, you’ll be able to play with perceptual phenomena by scaling the image.
Up/Down arrows. These will increase and decrease the pattern recursion factor and call for the image to be redrawn. Note that the listed features will always represent the unadjusted token. These changes can create dramatic differences in the resulting image!
Left/Right arrows. These adjust the midpoint count, and just like when adjusting the pattern recursion factor, the different results can be dramatic. Here are three examples again, with the hash-determined midpoint count in the center, then examples of reducing the midpoints by 7 and increasing by 7.
‘h’ to hide/show the shape. I decided to do this to allow the pattern to be appreciated in its minimal or complex glory.
‘s’ to save the image. Simple as that, but the file name is going to be long! Because these tokens are human re-creatable, I wanted to include the deterministic hash in every saved file’s name. The token number is also important, both to identify the token and because it’s used in a trait: I hadn’t mentioned it before, but whether the ID is even or odd determines which orientation the Triangle Fills use. Finally, because the interactivity above can vastly change the resulting image, the pattern recursion factor, midpoint count, and shape visibility are appended to the name.
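A file name carrying all of that state might be built like this. The exact format FOCUS uses may differ; this function and its argument names are hypothetical, but the ingredients (hash, token ID, recursion factor, midpoint count, shape visibility) come from the description above.

```javascript
// Hypothetical sketch of a save-file name (the real FOCUS format may
// differ): everything needed to reproduce the image is embedded.
function saveName(tokenId, hash, recursionFactor, midpoints, shapeVisible) {
  return `FOCUS_${tokenId}_${hash}_r${recursionFactor}_m${midpoints}_shape-${shapeVisible}.png`;
}
```

Encoding the interactive state in the name means a saved image can always be traced back to the exact keystrokes that produced it.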
Each of the following rows of images were made from the same Ropsten testnet mint. The first image in each set is the original mint.
It might take significant effort, but each output can be recreated by hand using only the token hash, a copy of the algorithm, some basic math, simple estimation, and a lot of patience. Here’s a work in progress of FOCUS #0 (Ropsten Test Network)!