Here's the idea, roughly inspired by this blog post. A site places a geolocation.xml file in its root directory or a subdirectory, or (probably better) embeds the XML data in a page. The file contains lat/long coordinates, probably defining an area: a center point and radius, the corners of a polygon, whatever.
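Nothing like this is standardized, so the shape below is purely a guess at what a geolocation.xml might look like; the element and attribute names (site, center, radius-km) are invented for illustration. A minimal Python sketch that parses such a file with the standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical geolocation.xml: the element names are made up for this
# sketch, not taken from any existing standard.
SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<geolocation>
  <site url="https://example.com/">
    <!-- Coverage area defined as a center point plus a radius -->
    <center lat="45.5231" lon="-122.6765"/>
    <radius-km>10</radius-km>
  </site>
</geolocation>
"""

def parse_geolocation(xml_text):
    """Pull (url, lat, lon, radius) records out of a geolocation.xml blob."""
    root = ET.fromstring(xml_text)
    entries = []
    for site in root.findall("site"):
        center = site.find("center")
        radius = site.find("radius-km")
        entries.append({
            "url": site.get("url"),
            "lat": float(center.get("lat")),
            "lon": float(center.get("lon")),
            "radius_km": float(radius.text),
        })
    return entries

print(parse_geolocation(SAMPLE))
# [{'url': 'https://example.com/', 'lat': 45.5231, 'lon': -122.6765, 'radius_km': 10.0}]
```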
A webcrawler would trawl for these files and index websites by them. Client apps would live on phones, GPS units, and the like; they'd check the device's location and tie it to "nearby" websites.
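On the client side, "nearby" is just a distance check between the device's position and each indexed site's declared area. A rough sketch, assuming the crawler produced records like the ones above; the haversine helper and the in-memory toy index are my own illustration, not part of the idea itself:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km is roughly Earth's mean radius

def nearby_sites(device_lat, device_lon, indexed_sites):
    """Return URLs whose declared coverage area contains the device."""
    hits = []
    for site in indexed_sites:
        dist = haversine_km(device_lat, device_lon, site["lat"], site["lon"])
        if dist <= site["radius_km"]:
            hits.append(site["url"])
    return hits

# Example: a device in downtown Portland against a tiny toy index.
index = [{"url": "https://example.com/", "lat": 45.5231, "lon": -122.6765, "radius_km": 10}]
print(nearby_sites(45.5152, -122.6784, index))  # ['https://example.com/']
```

A real index would need something smarter than a linear scan (a spatial index, geohash buckets, or similar), but the basic check is that simple.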
There are issues: spam management; actually handling the indexing, the overhead, the optimization; client app design; getting buy-in on the geoloc files. But those are details at this point. The sketch of the idea is there, and it's clearly a reachable goal. Maybe not reachable on my own, but reachable.