Why @@ is the best attribute syntax for PHP

Update 2020-09-02: The Shorter Attribute Syntax Change proposal was accepted, with #[Attr] voted as the final attribute syntax for PHP 8.0.


Analogous to docblock annotations, attributes make it possible to apply metadata to a class, function, or property with a native syntax and reflection API.

The Attributes v2 RFC (accepted on May 4, 2020) added support for native attributes in PHP 8 using the <<Attr>> syntax borrowed from Hacklang. In June, I authored the Shorter Attribute Syntax RFC to propose using @@Attr or #[Attr] to address five shortcomings of the original syntax (verbosity, poor readability of nested attributes, confusion with generics, confusion with shift operators, and dissimilarity to other languages). The result of this RFC was that the @@Attr syntax was accepted and implemented in PHP 8.0 beta 1.

However, several people were unhappy with this outcome and proposed a new RFC to vote again on the declined attribute syntaxes (as well as a couple of new alternatives) in hopes of a different result. This RFC puts forward four main arguments for changing the syntax: grouped attribute support, consistency, forward compatibility, and “potential future benefits”. Are these really good arguments?

Attribute grouping

Grouped attribute support (e.g. <<Attr1, Attr2>>) was added to partially reduce the verbosity of the <<>> syntax. However, this has the downside of creating two different syntaxes for declaring attributes. Furthermore, grouped attributes result in unnecessary diff noise when adding or removing a second attribute on its own line:

<<Attr>>
function foo() {}

// changes to
<<
    Attr,
    OtherAttr([1, 2, 3]),
>>
function foo() {}

In contrast, with the @@Attr syntax, individual attributes can always be added or removed without having to change attributes on other lines:

@@Attr
@@OtherAttr([1, 2, 3]) // can be added/removed independently
function foo() {}

Finally, the @@Attr syntax without grouping is just as concise as the alternatives with grouping, so grouping is not a reason to prefer any other syntax over @@Attr.


Consistency

The RFC argues that an attribute end delimiter is necessary for consistency with other syntax constructs in PHP. However, attributes are not standalone declarations, but modifiers on the declaration that follows them, similar to type declarations and visibility/extendibility modifiers:

// declaration modifiers do not have end delimiters like this:
[final] class X {
    [public] function foo([int|float] $bar) {}
}

// the @@ syntax is consistent with other declaration modifiers:
final class X {
    public function foo(@@Deprecated int|float $bar) {}
}

The RFC responds to this by arguing that attributes are more complex than modifiers and type declarations since they have an optional argument list. However, this fails to recognize that an attribute’s argument list already has its own start and end delimiters (parentheses)! So adding another end delimiter, if anything, reduces consistency rather than improving it.

The @@Attr syntax is consistent with existing declaration modifiers in PHP, as well as docblock annotations and other languages using the @Attr syntax.

Forward compatibility

The #[Attr] syntax could provide a temporary forward compatibility benefit to library authors, who would be able to reuse the same class both as a PHP 8 attribute and to store information from a docblock annotation when the library is used on PHP 7. However, this benefit becomes irrelevant once most of a library's users upgrade to PHP 8, or as soon as the library wants to take advantage of any other PHP 8 syntax.

Even without partial syntax forward compatibility, a library can support both PHP 8 attributes and PHP 7 annotations with a small amount of extra work. A parent class can be used to store/handle annotation information, and a child class can be registered as an attribute for PHP 8 users.

The downside of forward compatibility is that it can result in code that is valid in both PHP 7 and PHP 8, but runs very differently on each. For example:

class X {
    // This comments out the first parameter entirely in
    // PHP 7, silently leading to different behavior.
    public function __construct(
        #[MyImmutable] public bool $x,
        private bool $flag = false,
    ) {}
}

$f1 = #[ExampleAttribute] function () {};

$f2 = #[ExampleAttribute] fn() => 1;

$object = new #[ExampleAttribute] class () {};

foo();

// On PHP 7 the # sign comments out the rest of each line,
// so the statements above are interpreted as
$f1 = $f2 = $object = new foo();

// This example echoes the rest of the source code in
// PHP 7 (the ?> inside the string closes the PHP tag)
// and echoes "Test" in PHP 8.
#[DeprecationReason('reason: <https://some-website/reason?>')]
function main() {}
const APP_SECRET = 'app-secret';
echo "Test";

Is the temporary forward compatibility benefit (which realistically will only simplify code slightly for library authors) really worth the downside of a larger BC break and risk of valid code being interpreted very differently across PHP versions?

Potential future benefits?

Lastly, the RFC argues that an end delimiter could be helpful for enabling future syntaxes such as applying a function decorator:

#[fn ($x) => $x * 4]
function foo($x) {...}

However, there are other potential ways to accomplish the same thing that would arguably be even more readable and flexible:

// via a built-in attribute:
@@Before(fn ($x) => $x * 4)
function foo($x) {...}

// via a syntax for checked attributes:
function foo($x) {...}

Whether the attribute syntax has an end delimiter or not, it will be possible to extend functionality in the future. The @Attr syntax has been proven over many years through its use in docblock annotations and other languages, and if it was deficient in some way it almost certainly would have been discovered long ago.

The case for @@

The @@Attr syntax arguably strikes the best balance between conciseness, familiarity with docblock annotations, and a very small BC break. Unlike the #[Attr] and @[Attr] proposals, it does not break useful, functional syntax.

The lack of a separate end delimiter is consistent with other declaration modifiers, and can help avoid confusion when both docblocks and attributes are being used. The syntaxes with an end delimiter make attributes appear as if they are standalone declarations which a docblock can be applied to, even though they are not.

If you read this far, thank you! Let me know if this post changed your mind, or if there is some other argument that convinces you why @@ or another syntax is best.


PolyCast: a library for safe type conversion in PHP

On March 16, 2015, something amazing happened in the world of PHP. The long-awaited, hotly debated Scalar Type Declarations RFC was accepted for PHP 7! Finally, it will be possible to declare scalar types (int, float, bool, and string) for function parameters and return values:


function itemTotal(int $quantity, float $price): float
{
    return $quantity * $price;
}

The need for safe type casts

By default, scalar types are enforced weakly. So while passing a value such as “my string” to an int parameter would produce an error, values such as 10.9, “42.5”, true, and false would be accepted and cast to 10, 42, 1, and 0, respectively. This behavior lacks safety: passing any of these values is likely a mistake, and casting them silently loses data.

Enabling the optional strict mode will prevent values with an incorrect type from being passed, but this isn’t a complete solution. Whenever you are dealing with user input, whether from a posted form, url parameters, or an uploaded CSV, the data will arrive as a string. Before it can be passed to a function expecting an int or float, the data must be converted to the corresponding type.

Simple, right?


$total = itemTotal((int)$_POST['quantity'], (float)$_POST['price']);

Wrong. This is even less safe than the default type coercion! A user could pass a value such as “5 hundred” or “ten” and it would be cast to 5 or 0 without producing an error. This is especially concerning in scenarios where sensitive financial information is being handled.
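The same pitfall exists outside of PHP, which makes it easy to demonstrate; for instance, JavaScript's parseInt stops at the first non-numeric character instead of rejecting the input, just like a forced (int) cast. A safer conversion (strictInt here is a hypothetical helper shown only for illustration) must validate the whole string first:

```javascript
// parseInt accepts "5 hundred" and silently discards " hundred",
// mirroring the behavior of PHP's forced (int) cast.
console.log(parseInt("5 hundred", 10)); // 5 — data silently lost
console.log(parseInt("ten", 10));       // NaN

// A safer conversion validates the entire string before casting:
function strictInt(value) {
    if (!/^-?\d+$/.test(String(value))) {
        throw new Error(value + " is not a valid integer");
    }
    return parseInt(value, 10);
}

console.log(strictInt("42")); // 42
```

The regex-based check rejects partial matches outright rather than truncating them, which is the essential difference between a forced cast and a safe one.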

PHP filters?

In the past I’ve tried to solve this problem by using PHP’s built-in FILTER_VALIDATE_INT and FILTER_VALIDATE_FLOAT validation filters. However, there are two problems with this approach. First is verbosity: validating just two inputs for our itemTotal function requires eight additional lines of code:


$quantity = filter_var($_POST['quantity'], FILTER_VALIDATE_INT);
$price = filter_var($_POST['price'], FILTER_VALIDATE_FLOAT);

if ($quantity === false) {
    throw new Exception("quantity must be an integer");
} elseif ($price === false) {
    throw new Exception("price must be a number");
}

$total = itemTotal($quantity, $price);

Secondly, and even more problematic, filter_var casts the value being checked to a string and trims whitespace, which results in various unsafe conversions being accepted.

Introducing PolyCast

In October of last year, Andrea Faulds proposed a Safe Casting Functions RFC to fill the need for safe type conversion. At the same time, I started developing a userland implementation called PolyCast. Although Andrea’s RFC was ultimately declined, I continued to move PolyCast forward, with a number of improvements based on community feedback.

PolyCast comes with two sets of functions. The first (safe_int, safe_float, and safe_string) returns true if a value can be cast to the corresponding type without data loss, and false if it cannot. The second (to_int, to_float, and to_string) directly casts and returns the value when it is safe to do so, and otherwise throws a CastException.
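The semantics can be sketched in JavaScript (the names mirror PolyCast's API, but this is only an illustration of the idea, not the library itself, and it simplifies some edge cases):

```javascript
// Sketch of PolyCast-style safe int handling: accept integers,
// integer-valued floats, and strings of digits; reject anything
// whose conversion would lose data (e.g. "42.5" or "5 hundred").
function safeInt(value) {
    if (typeof value === "number") {
        return Number.isSafeInteger(value);
    }
    if (typeof value === "string") {
        return /^-?\d+$/.test(value) && Number.isSafeInteger(Number(value));
    }
    return false;
}

function toInt(value) {
    if (!safeInt(value)) {
        throw new TypeError("Value could not be safely cast to int");
    }
    return Number(value);
}

console.log(safeInt("42"));   // true
console.log(safeInt("42.5")); // false
console.log(toInt("42"));     // 42
```

Note that the string check deliberately rejects surrounding whitespace, unlike the filter_var approach described above.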

This makes safe type conversion nearly as simple as forced casts, without compromising safety:


use function theodorejb\polycast\{ to_int, to_float };

$total = itemTotal(to_int($_POST['quantity']), to_float($_POST['price']));

For more examples and details on which values are considered safe, check out the project on GitHub. PolyCast is tested on PHP 5.4+, and you can easily install it with composer require theodorejb/polycast.


High performance linked list sorting in JavaScript

Watch and chain image by Eduardo Mueses licensed under CC BY-NC-ND 2.0

In my previous post, I described a schema and set of associated queries to persist and update arbitrarily ordered items in a SQL database (using a linked list). This approach can scale to very large lists, without degrading performance when adding or rearranging items. But having stored a list, how can it be reproduced in the correct order? This post describes an approach to efficiently sort linked lists from SQL in client-side code. While the below examples are written in JavaScript, you could use the same basic technique in almost any modern language.

Suppose that you select the following (unordered) linked list from a database:

[
    {
        "item_id": 940,
        "item_name": "Second item",
        "previous_item_id": 239
    },
    {
        "item_id": 949,
        "item_name": "Fourth item",
        "previous_item_id": 238
    },
    {
        "item_id": 238,
        "item_name": "Third item",
        "previous_item_id": 940
    },
    {
        "item_id": 239,
        "item_name": "First item",
        "previous_item_id": null
    }
]

A naive approach to sorting

First, let’s consider a not-so-efficient way to sort the linked list:

function naiveSort(linkedList) {
    var sortedList = [];
    var index = 0;
    var previousItemId = null; // first item in list has null previous ID

    while (sortedList.length < linkedList.length) {
        var current = linkedList[index];

        if (current.previous_item_id === previousItemId) {
            // found the item referencing the previous item's ID
            previousItemId = current.item_id;
            sortedList.push(current); // append to sorted list
            index = 0; // start over at first element
        } else {
            index += 1; // check the next item
        }
    }

    return sortedList;
}
The naiveSort function re-loops through the list from the beginning each time it finds the next item and adds it to a sorted copy of the array. The function will return the correctly sorted list, but the number of required iterations grows quadratically as the list lengthens, following the equation size + size * ((size - 1) / 2). For example, a list containing 100 items would require 5,050 iterations, while a list containing 1,000 items would require 500,500! With this approach, any advantage of the linked list’s efficient insertion and reordering would be lost in lengthy sort times.

Fortunately there’s a much better way.

function mapSort(linkedList) {
    var sortedList = [];
    var map = new Map();
    var currentId = null;

    // index the linked list by previous_item_id
    for (var i = 0; i < linkedList.length; i++) {
        var item = linkedList[i];
        if (item.previous_item_id === null) {
            // first item: start the sorted list with it
            currentId = item.item_id;
            sortedList.push(item);
        } else {
            map.set(item.previous_item_id, i);
        }
    }

    while (sortedList.length < linkedList.length) {
        // get the item with a previous item ID referencing the current item
        var nextItem = linkedList[map.get(currentId)];
        sortedList.push(nextItem);
        currentId = nextItem.item_id;
    }

    return sortedList;
}
An efficient sorting algorithm

The mapSort function starts by looping through the linked list a single time, adding the item array indexes to a map with a key of the item’s previous_item_id property. It then follows the chain of item_id references through the map to build the complete sorted list. This approach requires (size * 2) - 1 iterations, allowing it to scale linearly with list length.
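To see the technique end to end, here is a self-contained version (the function is repeated so the snippet runs on its own) applied to the sample rows from the top of the post:

```javascript
function mapSort(linkedList) {
    var sortedList = [];
    var map = new Map(); // previous_item_id -> array index
    var currentId = null;

    // single pass: index each row by its previous_item_id,
    // and start the sorted list with the head (null previous ID)
    for (var i = 0; i < linkedList.length; i++) {
        var item = linkedList[i];
        if (item.previous_item_id === null) {
            currentId = item.item_id;
            sortedList.push(item);
        } else {
            map.set(item.previous_item_id, i);
        }
    }

    // follow the chain of item_id references through the map
    while (sortedList.length < linkedList.length) {
        var nextItem = linkedList[map.get(currentId)];
        sortedList.push(nextItem);
        currentId = nextItem.item_id;
    }

    return sortedList;
}

var rows = [
    { item_id: 940, item_name: "Second item", previous_item_id: 239 },
    { item_id: 949, item_name: "Fourth item", previous_item_id: 238 },
    { item_id: 238, item_name: "Third item", previous_item_id: 940 },
    { item_id: 239, item_name: "First item", previous_item_id: null },
];

var names = mapSort(rows).map(function (r) { return r.item_name; });
// → ["First item", "Second item", "Third item", "Fourth item"]
```

The Map lookup is what eliminates the rescanning: finding the follower of any item is a constant-time operation instead of a linear search.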

Testing with Node.js on my Core i5 desktop PC, the mapSort function was able to sort a 5,000 item list in an average of 2.3 ms, compared to 68.4 ms for naiveSort. With larger lists, the discrepancy grew even greater. Sorting 100,000 items took an average of over 40 seconds with naiveSort, but just 61.7 ms with mapSort!

There are likely other optimizations that could be implemented to further increase performance, but for most practical purposes this technique should prove sufficient.


Implementing a linked list in SQL

Recently I was challenged with enabling users to drag and drop items in a list to sort them in any order, and persisting that order in a SQL database. One way to handle this would be to add an index column to the table, which could be updated when an item is reordered. The downside of this approach is that whenever an item is added or moved, the index of every item beneath it must be updated. This could become a performance bottleneck in very large lists.

A more efficient approach is to use a linked list, where each item contains a reference to the previous item in the list, and the first item has a null reference (you could alternatively reference the next item, with the last item containing a null reference, but this requires the list to be sorted back-to-front, which I find less intuitive).

Let’s start by creating a minimal table for items:

CREATE TABLE ordered_items (
    item_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    item_name VARCHAR(100) NOT NULL,
    previous_item_id INT UNSIGNED NULL,
    FOREIGN KEY (previous_item_id) REFERENCES ordered_items(item_id)
);
Next, we’ll write two functions for adding items to the list: one to insert the item and the other to update any item referencing the same previous ID as the new item.


function addItem($name, $previousId)
{
    $sql = "INSERT INTO ordered_items (item_name, previous_item_id)
            VALUES (?, ?)";

    // The query function is assumed to prepare and execute a SQL statement with
    // an array of bound parameters, and return the insert ID or selected row.
    $itemId = query($sql, [$name, $previousId]);
    setInsertedItemReference($itemId, $previousId);
}

/**
 * If another item in the list has the same previous ID as the
 * inserted item, change it to reference the inserted item.
 */
function setInsertedItemReference($itemId, $itemPreviousId)
{
    $params = [$itemId, $itemId];

    if ($itemPreviousId === null) {
        $condition = 'IS NULL';
    } else {
        $condition = '= ?';
        $params[] = $itemPreviousId;
    }

    $sql = "UPDATE ordered_items
            SET previous_item_id = ?
            WHERE item_id <> ?
            AND previous_item_id {$condition}";

    query($sql, $params);
}

To remove an item from the list, we will again need two functions: one to delete the item row and another to update any item referencing the removed item.


function deleteItem($itemId)
{
    $previousId = selectItem($itemId)['previous_item_id'];
    closeMovedItemGap($itemId, $previousId);
    query("DELETE FROM ordered_items WHERE item_id = ?", [$itemId]);
}

function selectItem($itemId)
{
    $sql = "SELECT * FROM ordered_items WHERE item_id = ?";
    return query($sql, [$itemId]);
}

/**
 * If any other item has a previous ID referencing the moved item,
 * change it to point to the moved item's original previous ID.
 */
function closeMovedItemGap($itemId, $itemPreviousId)
{
    $sql = "UPDATE ordered_items
            SET previous_item_id = ?
            WHERE previous_item_id = ?";

    query($sql, [$itemPreviousId, $itemId]);
}

Finally, we can add a function to update items (including their sort order):


function updateItem($id, $name, $previousId)
{
    if ($id === $previousId) {
        throw new Exception('Items cannot reference themselves');
    }

    $originalItem = selectItem($id);

    if ($previousId !== $originalItem['previous_item_id']) {
        // the item was reordered
        closeMovedItemGap($id, $originalItem['previous_item_id']);
        setInsertedItemReference($id, $previousId);
    }

    $sql = "UPDATE ordered_items
            SET item_name = ?,
            previous_item_id = ?
            WHERE item_id = ?";

    query($sql, [$name, $previousId, $id]);
}
As can be seen, whether an item is added, removed, or reordered, at most three rows will need to be updated. This keeps the performance nearly constant, regardless of the size of the list. With the basic database implementation complete, in my next post I’ll share an approach to efficiently sort the linked list in client-side code.
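To sanity-check the pattern, the queries can be mimicked in memory. The following JavaScript sketch uses a plain array as a stand-in for the table; addItem and deleteItem here only illustrate the SQL above (they are not real database code), but they show the reference updates producing the expected order:

```javascript
// In-memory stand-ins for the ordered_items table and its queries.
var items = []; // rows: { item_id, item_name, previous_item_id }
var nextId = 1;

function addItem(name, previousId) {
    var itemId = nextId++;
    // mirrors setInsertedItemReference(): any row that shared the
    // new item's previous ID now points at the new item instead
    items.forEach(function (row) {
        if (row.previous_item_id === previousId) {
            row.previous_item_id = itemId;
        }
    });
    items.push({ item_id: itemId, item_name: name, previous_item_id: previousId });
    return itemId;
}

function deleteItem(itemId) {
    var removed = items.find(function (r) { return r.item_id === itemId; });
    // mirrors closeMovedItemGap(): followers of the removed item
    // now point at the removed item's own predecessor
    items.forEach(function (row) {
        if (row.previous_item_id === itemId) {
            row.previous_item_id = removed.previous_item_id;
        }
    });
    items = items.filter(function (r) { return r.item_id !== itemId; });
}

// walk the chain front-to-back to read the list order
function sortedNames() {
    var names = [];
    var prevId = null;
    while (names.length < items.length) {
        var next = items.find(function (r) { return r.previous_item_id === prevId; });
        names.push(next.item_name);
        prevId = next.item_id;
    }
    return names;
}

var a = addItem("A", null);
addItem("B", a);
var c = addItem("C", a); // insert C between A and B
console.log(sortedNames()); // [ 'A', 'C', 'B' ]
deleteItem(c);
console.log(sortedNames()); // [ 'A', 'B' ]
```

Each operation touches at most one existing row besides the inserted or deleted one, matching the constant-cost claim above.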


Interstellar starships: feasible or fiction?

“Enterprise at warp” image by 1darthvader licensed under CC BY-SA 3.0

This post is based on a research paper I wrote for my Introduction to Astronomy course at Rasmussen College earlier this month.

As the Klingon ship bears down on the Starship Enterprise, preparing to fire a barrage of photon torpedoes, Captain Picard shouts “Maximum warp!” and the Enterprise leaps away towards another star system, faster than the speed of light. Is this scene from Star Trek purely science fiction, or is there truth to the concept of interstellar starships? Despite the significant scientific progress that has been made in this area, energy requirements, high costs, and the problem of time still present enormous challenges to the vision of interstellar travel as portrayed in Star Trek and other films.

The nearest star system, Alpha Centauri, is about 25 trillion miles away from Earth. At the speed of a typical spacecraft, it would take more than 900 thousand years to cover this distance (Millis, 2008a)! Clearly, one of the first major barriers to interstellar travel is finding a way to achieve the speeds necessary to reach neighboring stars within a reasonable timeframe. This would require traveling close to the speed of light, at the very least. However, the amount of energy required to accelerate a ship like Star Trek’s Enterprise to just half the speed of light would be “more than 2000 times the total annual energy use of the world today” (Bennett, Donahue, Schneider, & Voit, 2014, p. 715). Where would such vast amounts of energy come from? While many ideas have been proposed, perhaps the most feasible of these was Project Orion, a propulsion system experimented with from the 1950s to 1960s which involved continuously detonating nuclear bombs behind a spaceship to propel it forward. Unfortunately, not only would this approach make for an uncomfortable ride, but it would also expose the crew to dangerous levels of radiation (Dyson, 2002). In the words of Aerospace Engineer Marc G. Millis (2008a), “we need either a breakthrough where we can take advantage of the energy in the space vacuum, a breakthrough in energy production physics, or a breakthrough where the laws of kinetic energy don’t apply” (last paragraph).

Supposing that the energy problem were solved, and a ship could achieve constant acceleration over any distance, it would only take about five years (from Earth’s perspective) to reach the nearest star, and thirteen years to reach Sirius (White, 2002). However, as the distance, acceleration period, and corresponding velocity increase, an interesting effect known as time dilation starts to become apparent. The greater the ship’s speed, the slower time will pass for those onboard. A trip which takes thirteen years from the perspective of Earth would only take seven years for the travelers, and even less time at rates of acceleration greater than 1G (White, 2002). For really long trips, thousands of years could pass on Earth while the travelers only experience a few decades! While this may seem like a beneficial effect, since it would allow travelers to reach destinations much further than otherwise possible, it presents an enormous problem for interstellar travel. What would be the point of sending individuals to other stars if their work would be of no benefit to those living on Earth? Any trip to a distant destination would almost certainly be one-way.

This brings us to the speculative realm of wormholes and warp drives. The special theory of relativity forbids objects from moving faster than light within space-time, but with enough matter or energy it is known that space-time itself can be warped and distorted (Millis, 2008b). In theory, space could be warped or “folded” to connect two separate points (creating a wormhole). Unfortunately, creating the wormhole would require placing a giant ring (“the size of the Earth’s orbit around the Sun”) of super-dense matter at each end of the wormhole, charging them with enormous amounts of energy, and spinning them up to “near the speed of light” (Millis, 2008b). Even if there were some way to obtain the necessary energy and super-dense matter, how would it be placed at the destination end without first traveling there? While wormholes could hypothetically be useful for frequent travel between two interstellar destinations, they do not provide a viable solution to getting there in the first place.

What about warp drives like those used in Star Trek? While the concept may sound impossible, according to a physicist named Miguel Alcubierre space could theoretically be compressed ahead of the ship and expanded behind it, allowing a ship to travel faster than light without violating the theory of relativity (Peckham, 2012). In effect, it is space that moves, rather than the ship. Unfortunately, creating a warp drive like this would require generating a ring of “negative energy,” and whether it is possible for such energy to exist is still under debate (Millis, 2008b). Assuming it is possible, it seems like this would be the most practical method of interstellar travel. It does not require long periods of time to accelerate and decelerate, passengers would not be jolted from changes in acceleration or pelted with particles of interstellar gas, and best of all time would pass at the same rate for the cosmic travelers as well as those remaining on Earth. NASA is currently in the very early stages of investigating whether such a drive is feasible.

With all the talk surrounding the possibility of moving starships through space at faster-than-light speeds, it is easy to forget that getting the ships into space in the first place is also a problem. In an article published on Gizmodo earlier this year, it was estimated that the cost of constructing a spaceship like the Starship Enterprise using technology available today would be roughly $480 billion (Limer, 2013). Astoundingly, more than 95% of this cost is simply to transport the necessary materials to space! This illustrates the disproportionately high cost of space transportation technology as it currently exists – putting a starship into space simply does not make economic sense at this point, even if we could build one.

In short, the enormous energy requirements, high cost, and problem of time all present significant roadblocks for interstellar travel. The theories proposed for faster-than-light travel are speculative at best, and far from practicality. While a breakthrough in propulsion allowing affordable, safe, and sustained acceleration could potentially allow us to reach the nearest stars, the problem of time dilation would make it infeasible to go further. Without major scientific advances in the areas of negative energy and space-time manipulation, the possibility of visiting an alien home world appears highly unlikely in the foreseeable future.


Bennett, J., Donahue, M., Schneider, N., & Voit, M. (2014). The Cosmic Perspective (Seventh ed.). San Francisco, CA: Pearson Education, Inc.

Dyson, G. (2002, February). The story of Project Orion. Retrieved from TED:

Limer, E. (2013, May 17). How Much Would It Cost to Build the Starship Enterprise? Retrieved from Gizmodo:

Millis, M. (2008a, May 2). A Look at the Scaling. Retrieved from NASA:

Millis, M. (2008b, May 2). Ideas Based On What We’d Like To Achieve. Retrieved from NASA:

Peckham, M. (2012, September 19). NASA Actually Working on Faster-than-Light Warp Drive. Retrieved from

White, R. B. (2002, August). Space Travel and Commerce using STL technologies. Retrieved from


Google, ethics, and Internet censorship

I originally wrote this post last September as a research paper for my Business Ethics course at Rasmussen College. I decided to post it on my blog now since I still feel strongly about the issues of censorship and online privacy, especially in light of recent leaks about the NSA’s top-secret surveillance programs.

What if the websites and other content we want to access online had to first pass through a filter which determines whether or not the content is favorable to the government, and blocks it if it is deemed critical? Sadly, this scenario is currently a part of life in China. All companies and organizations that operate within the country are required to comply with censorship laws and report the activities of citizens to the government. These conditions presented an interesting dilemma for Internet search giant Google, which was forced to choose between cooperating with government censorship laws and letting another company provide search services to the Chinese. While it is important (and ethical) for international companies to adhere to the laws and regulations of nations in which they operate, if those laws come in conflict with greater ethical interests such as human rights or individual freedom it is arguably better to cease operations in the country, rather than assist the government in its suppression of citizens.

“Don’t be evil.” The phrase was supposed to embody Google’s official corporate philosophy. In January of 2006, however, the company launched Google.cn and began censoring search results for Chinese users. How could a company with such a strong corporate culture practice something so seemingly contrary to their principles? The decision was not made lightly. In 2004, Google policy director Andrew McLaughlin was asked to conduct an ethical analysis with the sole purpose of determining whether Google’s presence would “accelerate positive change and freedom of expression in China” (Levy, 2011, p. 277). After nearly a year of research, McLaughlin determined that while “Google’s presence might benefit China,” the experience of working with a totalitarian government would be morally degrading to Google as an organization (p. 279). Google’s approach to this ethical dilemma demonstrated a teleological moral philosophy. In other words, they evaluated the situation based on its consequences – both to the Chinese people and their company.

Although revenue was very specifically not a consideration in McLaughlin’s report, the business prospects of entering China would have been impossible to ignore. With more Internet users than any other country, China presented an unquestionably alluring business growth opportunity. However, cofounder Larry Page remains resolute that the company was only trying to do the right thing for the people of China. “Nobody actually believes this, but we very strongly made these decisions on what we thought were the best interests of humanity and the Chinese people” (Levy, 2011, p. 280). While Page optimistically believed that Google’s services would benefit the Chinese, his partner Sergey Brin was troubled at the prospect of censorship. As a former refugee of the Soviet Union, Brin had personally experienced the burden of a communist government that imposes constraints on personal freedom (p. 274). In the end, however, Brin, Page, and CEO Eric Schmidt weighed the evil of censorship against the evil of not providing any services to the Chinese, and ultimately agreed that censorship was the lesser evil.

But did Google really have no other alternatives? This was not the case. While search may have been the most profitable of Google’s services, it was not their only service. By the time Google.cn was launched, the company already offered email and mapping solutions that were quickly growing in popularity. Additionally, Google could have pursued new business opportunities that would not require censorship (such as music sales or development platforms). While this approach may have changed little from an individual and societal perspective (if Google did not censor search results in China, someone else would), it would at least avoid the organizational degradation caused by working with a totalitarian regime. On the other hand, it is also possible that the Chinese people would have more freedom today if Google had never participated in government censorship. During the time Google.cn operated, the Chinese government progressively tightened Internet censorship requirements. According to human rights activist Peter Guo, China considers Google to be “one of the greatest threats” to the Communist Party (Dean, 2010). If Google had not compromised their principles, the Chinese business market might have looked less attractive to foreign investors, and the government could have been forced to reduce censorship in order to drive innovation.

From a deontological perspective, avoiding censorship at all costs would simply have been the right thing to do, whether it meant pursuing other business models or staying out of the country entirely. Even if unethical behavior is profitable, executives need to consider the kind of world they are helping to create, and whether or not that concerns them (MacKinnon, 2012, p. xxiii). Google could have continued providing unfiltered search results from outside the country, and while the Chinese government would likely block their search engine much of the time, at least it would be the government doing the censoring, rather than Google.

Google stopped providing censored search results in January 2010, four years after launching Google.cn. Ironically, the incident prompting this decision was a cyberattack, not censorship. Google discovered that the Chinese government was hacking into the Gmail accounts of Chinese human rights activists and stealing their personal data (it’s not hard to guess for what purpose). According to Google co-founder Sergey Brin, this was the “straw that broke the camel’s back” (Spiegel, 2010).

If there is one lesson that can be learned from Google’s foray into government censorship, it is that compromise is not necessary for corporate success, nor does it improve the lives of citizens. Google hoped that by compromising with the government, the government might eventually compromise with them, but the opposite turned out to be true. While the alternatives to censorship may not be as directly profitable, they can still lead to a net benefit, since customers and stakeholders will be more willing to trust and support the company. It is also worth pointing out that if Google was prepared to censor search results for the sake of profit in China, how could they be expected to resist similar calls for censorship in the United States (such as the Stop Online Piracy Act)? In the end, I’m glad Google did the right thing by stopping their censorship of Chinese search results. I only wish it hadn’t taken a cyberattack for them to make this decision.


Dean, J. (2010, January 13). Ethical Conflicts for Firms in China. Retrieved from The Wall Street Journal:

Levy, S. (2011). In the Plex: How Google Thinks, Works, and Shapes Our Lives. New York: Simon & Schuster.

MacKinnon, R. (2012). Consent of the Networked: The Worldwide Struggle for Internet Freedom. New York: Basic Books.

Spiegel. (2010, March 30). Google Co-Founder on Pulling out of China. Retrieved from Spiegel Online:


Psychology in Interactive Web Design

This post is based on a research paper I authored earlier this month as part of my General Psychology course at Rasmussen College.

Have you ever been struggling with a website or application that is difficult or confusing to use, and thought, “This could be so much easier if it were designed differently”? You may be surprised to discover that there are actually psychological reasons for what makes an interface good or bad, intuitive or difficult to use. This post will explore three areas of good interface design: proprioception, Gestalt psychology, and performance – with a focus on their application to web-based interactive software.

Proprioception

Proprioception refers to our body’s sense of position in space, and the position of various body parts in relation to each other. Because software is inherently non-physical, designers have to provide cues to indicate the user’s position. On traditional websites, these cues more often than not include breadcrumbs and navigation menus. However, in today’s world of varying device types and interaction methods (including keyboards, mice, and touch), new metaphors are necessary. One solution that is growing in popularity is to provide transitions between various screens (Bowles, 2013). By convention, leftward movement is seen as backwards, while rightward movement is seen as forwards or progression (Ibid). Vertical movement breaks out of this horizontal flow and can be used for actions outside the normal app flow.

There are more applications of proprioception than just transitions. For example, considering the Gestalt principle of similarity (discussed later in the essay), a button that leads to a particular section of an app could be designed and located similarly to the button that exits the section. The user would then understand that there is a connection between the two buttons, with the result that they would more intuitively understand how to navigate within the app. By carefully thinking about the logical location for data and controls within an app, and supplying cues to the user to indicate their position, developers can create an experience that requires less learning and feels much more natural to use.

Gestalt Principles

Gestalt is a German word meaning “shape” or “form,” and it refers to the way visual input is perceived by the human mind (Bradley, 2010). Gestalt psychologists have proposed a number of organizational laws (called Gestalt principles) which “specify how people perceive form” (Huffman, 2009, p. 105). The following section will examine five of these principles: Figure and Ground, Similarity, Proximity, Uniform Connectedness, and Parallelism, and how each of them applies to interactive software design.

Figure and Ground refers to the way humans perceive elements as either figure (the object of focus) or ground (the background which contains or supports the figure). When two objects overlap, the smaller object is seen as the figure against the larger background (Bradley, 2010). When designing web applications, it is important to ensure that there is sufficient padding around figures in the interface, and sufficient contrast between the figure and ground – this will allow elements to stand out and make the layout easier to understand at a glance. Consider the following example: the left-hand button’s lack of color, padding, and contrast makes it more difficult to understand at a glance than the button on the right.


The Gestalt principle of Similarity states that objects with a similar appearance will be perceived by humans as related (Bradley, 2010). Similarity can be achieved through shape, color, size, location, and other properties. In interface design, this principle is useful for helping the user to understand which elements are related or part of a group. Designers must be careful, however, to avoid similarity when elements are not related, as users will otherwise perceive an unintended association and find the application more confusing to use.

A third Gestalt principle is Proximity. According to this law of organization, “things that are close to one another are perceived to be more related than things that are spaced farther apart” (Rutledge, 2009). This principle is simple yet powerful, and takes precedence over similarity (Ibid). One practical use of this law in an interface might be a multi-column layout, where each column is separated by empty space and contains a unified collection of related elements or data. If there is not enough space between separate elements, however, it will be more difficult for users to determine that they are unrelated.


According to the principle of Uniform Connectedness, elements that are visually connected are perceived as related (Bradley, 2010). As a simple example, consider a speech bubble that is connected to a cartoon character by an arrow. Uniform Connectedness trumps both visual similarity and proximity when determining related elements. As with the principle of similarity, this is an important tool for interface designers, but care must be taken that there are not unintended connections between separate parts of an interface (whether because of lines, colors, or other connecting elements). As a personal example, when I was recently planning the interface design for a web app, I considered using the same background color for the headings of two separate sections in the interface. However, I was never satisfied with how it looked. After studying the Gestalt principles, I realized that using the same color visually connected the two headings, causing a perception that they were related when they actually were not. This discovery prompted me to rethink my approach to the interface.

The last Gestalt principle I will examine is Parallelism, which states that parallel elements are perceived as more related than non-parallel elements (Bradley, 2010). A practical application of this to UI design could be rotating a less-related element so that its contents are at a different angle to the rest of the interface (consider the screenshot below, where the “Fork me on GitHub” ribbon is angled away from the rest of the content).


Performance

No matter how good an application’s layout, visual design, and navigation are, it still won’t provide a good user experience if it performs poorly. Users expect animations to be smooth, and apps to respond immediately to their taps, clicks, and gestures. User perceptions of time often don’t match reality, however, and for developers this perception is critical. The more steps a user remembers in a process, the slower it seems (Tepper, 2012). Therefore, reducing the number of steps required to complete an action will make an app “feel” faster, even if the amount of time involved does not significantly change.

Research has shown that actions must take no longer than 50-100 milliseconds to feel instant (Tepper, 2012). Using an online latency demo (which at the time of this writing appears to no longer be available), I found that I personally started noticing a lack of responsiveness after about 75ms. Interestingly, users have a much higher tolerance for delays when there is an indication of progress, so if an action could take longer than 100ms it may be a good idea to give the user some type of feedback (for example, via an animated progress bar or other indicator).
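One way to apply this in practice is to delay showing the progress indicator until an action has actually exceeded the “feels instant” window, so fast actions never flash a spinner at all. Here is a minimal sketch of that idea (the helper and callback names are my own, and the 100 ms default comes from the threshold cited above):

```javascript
// Run an async action, showing a progress indicator only if the action
// takes longer than the ~100 ms "feels instant" threshold.
function withProgress(action, showIndicator, hideIndicator, thresholdMs = 100) {
  const timer = setTimeout(showIndicator, thresholdMs);
  return action().finally(() => {
    clearTimeout(timer); // fast path: the indicator never appears
    hideIndicator();     // slow path: remove the indicator we showed
  });
}

// Usage: wrap any promise-returning operation, e.g. a network request:
// withProgress(() => fetch("/api/data"), showSpinner, hideSpinner);
```

Calling `hideIndicator` unconditionally keeps the helper simple; it is assumed that hiding an indicator which was never shown is a harmless no-op.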

On the Web, optimizing performance and providing feedback can be even more important than in native apps, for three reasons:

  1. Web-based applications will be accessed by a much wider variety of device types and performance classes.
  2. Non-optimized apps can increase bandwidth costs, especially when scaling to thousands or millions of users.
  3. Actions by one user can directly impact the performance of other users on the same server.

Understanding what makes a good interface and usable application is critical to creating the best user experience. Applying the principles of proprioception, Gestalt psychology, and performance optimization/feedback will enhance usability and create an environment that users will love to use and tell their friends about.


Bowles, C. (2013, March 7). Better Navigation Through Proprioception. Retrieved from A List Apart:

Bradley, S. (2010, January 25). Gestalt Principles: How Are Your Designs Perceived? Retrieved from Vanseo Design:

Huffman, K. (2009). Psychology in Action (8th ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Rutledge, A. (2009, March 28). Proximity, Uniform Connectedness, and Good Continuation. Retrieved from

Tepper, D. (2012, April 3). How to improve performance in your Metro style app. Retrieved from Windows 8 app developer blog:


Responsive Captcha: A small PHP library for preventing spam

If you’re reading this, you probably already know what a CAPTCHA is. The most common form consists of an image with warped or obscured characters which must be entered into a text field. While these image-based CAPTCHAs tend to be effective at stopping spam, they are also poorly accessible, often slow, and require a third-party service or large font files. Surely there must be a better way.

There is. Text-based CAPTCHAs use simple logic questions to weed out bots while remaining accessible to users with disabilities. I found numerous text CAPTCHA implementations floating around the Web, but I was disappointed that they all either relied on a third-party service or required setting up a database. So I decided to make my own.

The result is Responsive Captcha, a PHP library which generates simple, random arithmetic and logic questions, and can be easily integrated into an existing form.

Some example questions generated by Responsive Captcha include:

  • Which is smallest: eight, sixty-nine, or seven?
  • What is nine minus five?
  • What is the third letter in rainbow?
  • What is eight multiplied by one?
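The underlying technique is simple enough to sketch. The following is an illustration in JavaScript of how such questions can be generated and checked — these function names are hypothetical and this is not Responsive Captcha’s actual API (the library itself is PHP); in a real form the expected answer would be kept server-side, for example in the session:

```javascript
// Hypothetical sketch: generate a random arithmetic question in plain
// English and validate the visitor's answer. No database or third-party
// service is required.
const WORDS = ["zero", "one", "two", "three", "four",
               "five", "six", "seven", "eight", "nine"];

function makeQuestion() {
  let a = Math.floor(Math.random() * 9) + 1; // 1..9
  let b = Math.floor(Math.random() * 9) + 1; // 1..9
  const ops = [
    { text: "plus",          apply: (x, y) => x + y },
    { text: "minus",         apply: (x, y) => x - y },
    { text: "multiplied by", apply: (x, y) => x * y },
  ];
  const op = ops[Math.floor(Math.random() * ops.length)];
  if (op.text === "minus" && b > a) [a, b] = [b, a]; // keep answers non-negative
  return {
    question: `What is ${WORDS[a]} ${op.text} ${WORDS[b]}?`,
    answer: op.apply(a, b),
  };
}

function checkAnswer(expectedAnswer, submittedValue) {
  return Number(submittedValue) === expectedAnswer;
}
```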

For more examples and instructions for use, check out the project on GitHub: