Wednesday, May 15, 2013

Glocal Launches Marketplace in Beta to Help Small Businesses Extend Reach to Local Community and Beyond

Glocal, the hyperlocal lifestyle hub, today announced the beta launch of the Glocal Marketplace, a segment of the site designed to connect users with the hard-to-find products and services unique to their communities. The Glocal Marketplace also provides small businesses with an opportunity to extend their reach to people passionate about supporting local business.

Accessible via the Local Goods tab on the home page, the Glocal Marketplace features user- and business-submitted items that span several categories, from native foods and fashion to home décor and housing. People searching for Seattle seafood can find a link to purchase Dungeness crab legs, while a Detroit Lions football fan can identify the best place to buy a branded bowling ball.

"Locals are passionate about the goods and services that make their town distinctive," said Lincoln Cavalieri, founder and president of Glocal. "The Glocal Marketplace provides a venue to tout those items while driving attention and business to the great local companies that create them."

Any user can post items to the Marketplace via a simple interface, which provides prompts for item description, price and a link to an e-commerce page. Users also tag the product or service to one of Glocal's 147 communities, ensuring that it is featured alongside other local goods and content.

For small businesses, the Glocal Marketplace presents a way to leverage the local community while simultaneously profiling signature items on a broader scale.

"The great benefit of the Marketplace is that a small winery in Napa Valley that posts its latest vintage can easily reach a wine connoisseur in London who's browsing the site," said Cavalieri. "Making those connections will give small businesses a new platform to amplify their profile locally and extend their reach to potential new customers."

Posting items to the Glocal Marketplace is free. Small businesses interested in profiling their goods can visit the registration page to get started.

For more information on Glocal and the beta launch of the Glocal Marketplace, visit

About Glocal, Inc.
Founded in 2010, Glocal Inc. is an Internet company operating a website that gives people from around the world the power to share local content and to buy and sell unique local goods via the Glocal Marketplace. For more information, visit

Elaine Green
Airfoil for Glocal, Inc.

Tuesday, March 5, 2013

PR Web (Press Release), March 6, 2013: Glocal Launches Social Sharing Platform

Discover and share stories from the world's communities

Online social platform Glocal announced the recent release of its new social sharing web application. Glocal is a new way to share journalism online through articles, videos and photos submitted by users in 113 local markets around the globe.

Glocal lets users share their favorite videos, pictures, and articles discovered online with their local community, friends, and family. The platform not only displays user content and news coverage from the nearest city, but also surfaces popular content from cities abroad. In addition to providing a world-class social experience, the site features a "dateline" button and a "post it" bookmarklet. The dateline is a button journalists can put on their website or blog, taking a new approach to declaring a story, photo, or video's proximity. When selected, the story is shared on the user's page and to the predefined city.

A robust subscription feature is also offered on Glocal, where users can subscribe to popular news providers like CNN, shows like CONAN, news enthusiasts, bloggers, world travelers and their friends.

Glocal's founder and CEO Lincoln Cavalieri says the "Glocal mission is to provide stories to the world's communities from the people who represent them. It's a new approach to local content curation done on a global scale."

The Glocal platform can be viewed at

The dateline button can be found at

About Glocal, Inc. Glocal Inc. is a startup Internet company operating a website that lets people from around the world upload and embed local content. The company was founded in 2010. Glocal Inc. will be at the SXSW Trade Show March 10-13, 2013 in Austin, TX, booth number 1570. For more information, see
Copy the Glocal dateline button here:

Tuesday, December 4, 2012

Tuning Your Own Reddit-style Ranking Algorithm

An important feature of web applications like Glocal and other content aggregators is ordering content so the best articles are at the top of the page. Algorithms to accomplish this don't require math any more complex than what you learned in high-school algebra. To demonstrate, I will walk through the basics of "hotness" ranking and demonstrate one implementation using SQL queries.

First, imagine you have the following information about your website's content in the article table in a database.

Title                 Likes Submitted
The Mangiest Cat          5 2012-12-01 19:48:45
My Political Rant         0 2012-12-04 06:06:12
The Dark Knight Rises   146 2012-08-09 23:16:23

Your site will probably have a lot more articles than this but we'll stick with these three to keep it simple. If you aren't familiar with databases, you can think of a table exactly like a spreadsheet page with named columns of data. Databases use a simple language called SQL to retrieve and manipulate their data.

In an SQL SELECT statement, we can use the ORDER BY clause like so:

SELECT Title, Likes, Submitted
FROM article
ORDER BY Submitted DESC

This retrieves the data ordered by descending submission time (most recent on top, oldest on bottom):

Title                 Likes Submitted
My Political Rant         0 2012-12-04 06:06:12
The Mangiest Cat          5 2012-12-01 19:48:45
The Dark Knight Rises   146 2012-08-09 23:16:23

This is a useful way to order content so your visitors will be shown the newest articles first. Unfortunately the most recent one is the political rant. The article's Likes count of zero implies, not surprisingly, that people are more interested in looking at a picture of a mangy cat.
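
If you want to follow along without setting up a database server, here is a sketch of the same table and query using Python's built-in sqlite3 module (the post itself uses MySQL; SQLite is just a convenient stand-in for experimenting):

```python
import sqlite3

# In-memory SQLite stand-in for the article table above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE article (Title TEXT, Likes INTEGER, Submitted TEXT)")
conn.executemany(
    "INSERT INTO article VALUES (?, ?, ?)",
    [
        ("The Mangiest Cat", 5, "2012-12-01 19:48:45"),
        ("My Political Rant", 0, "2012-12-04 06:06:12"),
        ("The Dark Knight Rises", 146, "2012-08-09 23:16:23"),
    ],
)

# Newest first; 'YYYY-MM-DD HH:MM:SS' strings sort correctly even as text.
rows = conn.execute(
    "SELECT Title, Likes, Submitted FROM article ORDER BY Submitted DESC"
).fetchall()
for row in rows:
    print(row)
```

The first row printed is the political rant, just as in the table above.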

We could instead retrieve our data in order of how many Likes each article has earned.

SELECT Title, Likes, Submitted
FROM article
ORDER BY Likes DESC

This returns the most-Liked article first and the least-Liked last:

Title                 Likes Submitted
The Dark Knight Rises   146 2012-08-09 23:16:23
The Mangiest Cat          5 2012-12-01 19:48:45
My Political Rant         0 2012-12-04 06:06:12

A review of the movie The Dark Knight Rises sounds great, but at nearly 4 months old, it is starting to grow mold. To order by what is "hot", we need to order our articles in a way that takes both the number of Likes and time Submitted into account. This is going to require some math. Do you remember when you were a snotty little kid and asked your math teacher when you would need to know math for real life? You owe her an apology.

First, let's convert the submitted date to an actual number so we can perform some simple mathematical operations on it (you know, like addition.) Unix time is the number of seconds since January 1st, 1970. This will give us a value of the submission date as a simple integer. MySQL (the database I am using on my current project) has a built-in conversion function.

SELECT Title, Likes, UNIX_TIMESTAMP(Submitted)
FROM article

Will give us this:

Title                 Likes UNIX_TIMESTAMP(Submitted)
The Mangiest Cat          5 1354409325
My Political Rant         0 1354619172
The Dark Knight Rises   146 1344568583

Now that we have the data we need as basic numbers, the easiest way to combine them is to add the number of Likes to the submitted Unix time. Let's see how well that works.

SELECT Title, Likes + UNIX_TIMESTAMP(Submitted) AS Hotness
FROM article

This will return our data in this order:

Title                   Hotness
My Political Rant 1354619172
The Mangiest Cat        1354409330
The Dark Knight Rises 1344568729

This returns our articles in the exact same order as by date only. It should not be surprising because adding a measly 5 votes to a number that is over a billion does not make much of a difference. We will need to magnify the votes value to allow it to compete with the size of the time stamp.

Let's start by deciding that we want a single vote to affect our hotness ranking as much as if the article were submitted an entire day more recently. A day is 86,400 seconds long (24 hours * 60 minutes * 60 seconds) so let's multiply Likes by that.

SELECT Title, Likes * 86400 + UNIX_TIMESTAMP(Submitted) AS Hotness
FROM article

Gives us this new data order:

Title                   Hotness
The Dark Knight Rises 1357182983
The Mangiest Cat        1354841325
My Political Rant 1354619172

This moved our cat picture above the political rant, but the article from back in August is still on top. We have a new problem: how do we give a small number of Likes a boost without allowing articles with a very high number to dominate the top of the list long past the time they are relevant? Fortunately there are some mathematical operators that do just this.

Logarithms are one such operator. Log base 10 is roughly the same as counting the number of digits in a decimal number, minus 1. For example:

Number      log10
1           0
10          1
100         2
1,000       3
1,000,000   6
0           No! You can't do this.
146         2.16435285578444
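
You can reproduce this chart with Python's math module, including the forbidden zero case:

```python
from math import log10

# Reproduce the chart: log10 is roughly "number of digits minus one".
for n in (1, 10, 100, 1_000, 1_000_000, 146):
    print(n, log10(n))

# log10(0) really is off-limits: math.log10 raises instead of returning.
try:
    log10(0)
except ValueError as err:
    print("log10(0) ->", err)
```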

I don't know what 2.16435285578444 digits looks like, but hey, it's math. Looking at the chart above, you can see that the log10 of 10 votes is 1. Doubling that result to 2 requires 10 times as many votes. That's just the effect we were looking for. Let's throw this into our SQL query:

SELECT Title, LOG10(Likes) * 86400 + UNIX_TIMESTAMP(Submitted) AS Hotness
FROM article

This blows up because you are not allowed to take a logarithm of 0 and nobody liked the political rant. Let's add 1 to every Like count to protect from this and run it again.

SELECT Title, LOG10(Likes + 1) * 86400 + UNIX_TIMESTAMP(Submitted) AS Hotness
FROM article

Title                   Hotness
My Political Rant 1354619172
The Mangiest Cat        1354476557.268033
The Dark Knight Rises 1344755839.2177222

That didn't work at all! Wait, because we are taking the log10 of the Likes count, one vote does not equal 1 day anymore. What does it equal now?

LOG10(1 + 1) = .301029995663981

Since any (non-zero) number divided by itself equals one, let's make one vote worth one day again by dividing the log result by .301029995663981.

SELECT Title, LOG10(Likes + 1) * 86400 / .301029995663981 + UNIX_TIMESTAMP(Submitted) AS Hotness
FROM article

Title                   Hotness
The Mangiest Cat        1354632666.0811288
My Political Rant 1354619172
The Dark Knight Rises 1345190635.5848327

Oh hell yeah. That is exactly the result I was looking for - recent and popular content is on top. We can combine and round the two figures that modify the vote number to make the SQL statement a little simpler.

SELECT Title, LOG10(Likes + 1) * 287015 + UNIX_TIMESTAMP(Submitted) AS Hotness
FROM article

One vote equaling one day is a pretty strong effect (Reddit makes 1 vote equal only 12.5 hours), so you may want to adjust the vote multiplier to find the balance that works best for your content and community.
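
The whole walkthrough condenses into a few lines of plain Python, which is handy for experimenting with different vote weights before touching SQL (VOTE_WEIGHT and hotness are names I made up for this sketch):

```python
from math import log10

# One Like is worth one day of freshness: 86,400 / log10(2) rounds to the
# 287015 used in the final query above.
VOTE_WEIGHT = round(86400 / log10(2))

def hotness(likes, submitted_unix):
    return log10(likes + 1) * VOTE_WEIGHT + submitted_unix

articles = [
    ("The Mangiest Cat", 5, 1354409325),
    ("My Political Rant", 0, 1354619172),
    ("The Dark Knight Rises", 146, 1344568583),
]
ranked = sorted(articles, key=lambda a: hotness(a[1], a[2]), reverse=True)
for title, likes, submitted in ranked:
    print(title, hotness(likes, submitted))
```

Recent-and-popular wins: the cat picture comes out on top, the stale movie review sinks to the bottom.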

Thursday, October 18, 2012

Reddit Ranking Algorithm on the Cheap with SQL Only

Ordering website content by "hotness" (a combination of user voting and freshness) like Reddit does is a great tool for empowering your community to curate site content. There are several articles floating around discussing the Reddit algorithm in general terms, but they don't offer many suggestions on how to implement the infernal thing. While exploring popularity ranking for videos on Glocal, I learned it can be easy to get hotness ranking working in your own Web app. As long as you have data for total votes and submission date, you can implement a version using only the SQL ORDER BY clause.

Glocal currently uses MySQL but the SQL syntax should be adaptable to other databases. Here is a simple query to get data for the 50 newest videos.

SELECT id, title
FROM videos
ORDER BY id DESC
LIMIT 50

Ordering by recent is simple and fast because id is the primary key of the table. Ordering by hotness is a little crazier.

Reddit Hotness in My Own Words

The basic gist of the Reddit hotness score is adding a number that represents the community's votes to a number that represents the newness of the content. The vote portion is calculated like so:
vote_score = LOG10(| vote_total |) * sign_of_votes
vote_total is the upvotes minus the downvotes. If up and down are equal, make the difference equal to one instead of zero. (This must be why Reddit is so gallant about giving you a free upvote - one vote gives exactly the same ranking as none.) To get the vote score, take the absolute value and calculate log base 10.

sign_of_votes is needed in case some content inspired more downvotes than upvotes. Because we took the absolute value of the difference before taking the log, we are treating negatives and positives the same way. Multiplying by negative one if the vote difference was negative makes sure that a result of -10 does not rank the same as a +10.
time_score = (time_submitted - magic_number_1) / magic_number_2
The time_score is a flat value representing the time the content was submitted. It has to be tweaked with magic numbers to play nice with the vote_score. Reddit uses 1,134,028,003 as magic_number_1 and 45,000 as magic_number_2 (45,000 seconds is what makes one vote worth 12.5 hours). Like all magic numbers, they were probably generated by throwing random values at the formula until it returned the desired results. If you are doing this and want to sound impressive to your boss, you can say you are "tuning the algorithm."
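
Putting the two pieces together, here is a Python sketch of the formula as described (hot is my name for it; the constants are the ones from Reddit's open-source code):

```python
from math import log10

REDDIT_EPOCH = 1134028003   # magic_number_1
TIME_DIVISOR = 45000        # magic_number_2: 12.5 hours in seconds

def hot(upvotes, downvotes, submitted_unix):
    diff = upvotes - downvotes
    # Treat a zero difference as one so the log is defined (log10(1) == 0).
    vote_score = log10(max(abs(diff), 1))
    sign_of_votes = 1 if diff > 0 else (-1 if diff < 0 else 0)
    time_score = (submitted_unix - REDDIT_EPOCH) / TIME_DIVISOR
    return vote_score * sign_of_votes + time_score
```

Note the free-upvote quirk: hot(1, 0, t) and hot(0, 0, t) return the same score, and multiplying the net votes by ten buys exactly one extra time unit (12.5 hours).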

SQL Implementation

Here is my approximation of the above formula using an SQL ORDER BY clause:

SELECT id, title
FROM videos
ORDER BY
    LOG10(ABS(cached_votes_total) + 1) * SIGN(cached_votes_total)
    + (UNIX_TIMESTAMP(created_at) / 300000) DESC


The two values from the videos table that I am using are cached_votes_total and created_at. cached_votes_total is updated in the videos table every time a new vote is cast. This keeps us from having to join to another table in what is already a pretty hefty query with all that math in it.

I was unable to find a simple way to make the vote total equal one if it was zero, so I instead added +1 to the absolute value. This prevents any attempt to take the log of zero (which could tear the very fabric of reality) and did not appear to change the results in a significant way.

I did struggle to get the created_at date to return a reasonable score. Using the UNIX_TIMESTAMP MySQL function helped, but whatever it was returning was not the same as Reddit's Python version. After some "tuning" I ditched the first magic number and just divided by 300,000. You will probably have to create your own magic numbers depending on how dates are handled in your application.

Performance and Optimization

This is not the fastest query, as you can imagine, but it runs well enough in Glocal's current incarnation. A quick test showed 830ms on over 140,000 rows of data. We can check only the most recent 5,000 videos by adding this clause:

WHERE id > (SELECT COUNT(*) - 5000 FROM videos)

This reduced the query time to roughly a quarter of the original: 211ms.

Unfortunately this does not scale well because as more content is added the time required to run the query will continue to increase. The most practical solution going forward is to add a column to the videos table to store the ranking value calculated by an external process. Because Reddit hotness can be calculated without caring about the current date and time, ranking values only change when vote totals change. As I mentioned above, we are already updating the cached_votes_total column when new votes are cast, so it would probably not be a significant additional burden to update the ranking value at the same time.
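
A sketch of that approach, assuming a hypothetical hotness column and (for simplicity) a created_unix integer column instead of the post's created_at datetime:

```python
import sqlite3
from math import log10

def ranking(votes_total, created_unix):
    # The ORDER BY expression from above, moved into application code.
    sign = 1 if votes_total > 0 else (-1 if votes_total < 0 else 0)
    return log10(abs(votes_total) + 1) * sign + created_unix / 300000

def record_vote(conn, video_id, delta):
    # Bump the cached total, then refresh the stored ranking in the same
    # write path, so the SELECT never has to do the math.
    conn.execute(
        "UPDATE videos SET cached_votes_total = cached_votes_total + ? "
        "WHERE id = ?",
        (delta, video_id),
    )
    total, created = conn.execute(
        "SELECT cached_votes_total, created_unix FROM videos WHERE id = ?",
        (video_id,),
    ).fetchone()
    conn.execute(
        "UPDATE videos SET hotness = ? WHERE id = ?",
        (ranking(total, created), video_id),
    )

# Demo with an in-memory SQLite database standing in for MySQL.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE videos (id INTEGER PRIMARY KEY, title TEXT, "
    "cached_votes_total INTEGER, created_unix INTEGER, hotness REAL)"
)
conn.execute("INSERT INTO videos VALUES (1, 'Demo clip', 0, 1354619172, NULL)")
record_vote(conn, 1, +1)
```

With hotness stored (and indexed), the feed query collapses to a plain ORDER BY hotness DESC.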

Query Provided As-Is; Your Results May Vary

This is not the fanciest implementation, but even without optimizing it should be useful for small sites and prototyping your own hotness ranking feature.