On render “ghosts”

Thank you to everyone on the Opengeofiction server.

We had some unexpectedly long downtime last night and this morning.

The render server had some kind of problem (it's not really clear what went wrong) and needed to be restored from backup (not part of the plan), and then the ogf-carto render was reset (which had been the plan).

Just want to say thank you to the community for your patience.

For those who have been struggling with “render ghosts” – I think they are all fixed now. You can see one really good example here:

picture

For comparison, the ogf-topo render still has the ghosts (yes, topo needs maintenance work, I know!):

picture

UPDATE: by the way, Happy mapping!

Music to fix servers by: Cake, “Long Line Of Cars.”

Advanced Conlanging…

“Conlanging” is the accepted name for the hobby of inventing new languages. I have been a conlanger since around age 7 – I remember inventing a language for my stuffed animals (specifically, a tribe of stuffed raccoons), and though I don’t have my notes, I remember it being fairly sophisticated for something created at that age.

Like many conlangers, I’ve only ever treated it as a kind of side hobby – though it dovetails nicely with another hobby I had as a child and resurrected in my post-cancer years, one that has actually become a major avocation: geofiction (which is the whole point of this blog and website!). And of course, like many conlangers, I was drawn to linguistics as a young adult, and it eventually became one of my undergrad majors at the University of Minnesota. I don’t regret that at all.

Anyway, this post is about conlanging, not geofiction. For some years now, there have been some interesting websites and computer applications for inventing languages and storing the data. But yesterday I found a site that takes it to a new level: vulgarlang.com. I rather dislike the name, but I think it’s probably good marketing. Anyway, the site was created by people who are clearly quite knowledgeable about matters linguistic – to a level I’ve never seen before. I went ahead and paid the $25 “lifetime” access – we’ll see how that pans out, as I’ve seen many a website offer those terms, only to disappear after 5-10 years or radically alter its business model so that the guarantee never eventuates. But anyway, how could I resist? Let there be more conlanging, then – at a higher level of quality than ever before.

As an aside, I haven’t posted much of my conlanging work online at all, but a very incomplete exemplar can be found in this article about the Mahhalian language, which I originally created about 6 years ago.

On Mass Deletions in OGF

This morning, I posted this in response to a query on the User Diaries at OpenGeofiction:

General advice for all users:

When deleting large numbers of objects, please be careful. This is not a use-case that the OSM software is designed to handle (think about it – mass deletions are NOT common on OSM). Divide up your deletions to cover small numbers of objects (<1000) and small areas (so if something goes wrong you don’t mess up large areas).

I called attention to the post in the Discord channel (“OGFC”), and so I decided to rant onward and provide some additional clarification and thoughts. I’ve distilled those, without much editing, below.

To the above, I would add that in my own practice I take one further precaution with deletions: I almost never delete relations. You’ll notice that JOSM actually warns you that “this is rarely necessary” when you try to delete a relation, and I think that’s a solid point. What you can do with relations you no longer need is repurpose them: delete the relation’s members and all of its tags, and then reuse it the next time you need a new relation for something.
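
To make that concrete, here is a minimal sketch of what “repurposing” means at the data level: stripping a relation’s members and tags in a saved .osm file, leaving an empty relation that can be reused later. The file name and relation id are hypothetical, and in practice you would do exactly the same thing interactively in JOSM rather than with a script.

```python
# Minimal sketch: strip the members and tags from one relation in a saved .osm
# file, leaving an empty relation "shell" that can be reused later.
# The file name and relation id are hypothetical; in practice this is done
# interactively in JOSM.
import xml.etree.ElementTree as ET

OSM_FILE = "my_edits.osm"   # hypothetical JOSM save file
RELATION_ID = "123456"      # hypothetical relation to repurpose

tree = ET.parse(OSM_FILE)
root = tree.getroot()

for rel in root.findall("relation"):
    if rel.get("id") == RELATION_ID:
        for child in list(rel):        # remove all <member> and <tag> children
            rel.remove(child)
        rel.set("action", "modify")    # so JOSM treats it as a modified object

tree.write(OSM_FILE, encoding="utf-8", xml_declaration=True)
```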

user: What would be the potential harm of deleting relations?

Well, I suspect that sometimes a deleted relation can end up as a “ghost” object, or a “stranded” object, in the render database. Meaning it’s truly been deleted from the API database (the main rails port), but that deletion fails to transfer correctly to the render database (which stores things in a different format, and doesn’t actually have “relations” at all, but rather transforms a relation into a special type of “closed way,” I think).

By repurposing a relation instead of deleting it, you guarantee the render database will have to reclassify the linked object(s).

If you just delete it, the render database might not handle the disappeared relation correctly, and just leave the no-longer-linked objects lying around.

This is just speculation, because I don’t really understand it well – it’s just based on observed behavior.
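
For what it’s worth, here is a sketch of how one might go looking for such ghosts. It assumes an osm2pgsql-style render database (where relation-derived rows carry negative osm_id values) and a rails-port apidb with a current_relations table; the database names are placeholders.

```python
# Sketch: list relation-derived polygons in the render database whose relation
# no longer exists (or is no longer visible) in the apidb. osm2pgsql stores
# relation-derived geometries with negated ids, hence the sign flip.
# Database names are placeholders.
import psycopg2

render = psycopg2.connect(dbname="gis")            # render database (osm2pgsql schema)
apidb = psycopg2.connect(dbname="openstreetmap")   # rails-port apidb

with render.cursor() as rcur, apidb.cursor() as acur:
    rcur.execute("SELECT osm_id, name FROM planet_osm_polygon WHERE osm_id < 0")
    for osm_id, name in rcur.fetchall():
        rel_id = -osm_id
        acur.execute("SELECT visible FROM current_relations WHERE id = %s", (rel_id,))
        row = acur.fetchone()
        if row is None or not row[0]:
            print(f"possible ghost: relation {rel_id} ({name!r}) still in render db")
```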

In general, I think that the OSM software is not designed to handle mass deletions. That’s the key point.

Because in OSM, who goes around deleting large numbers of objects? The real world doesn’t work that way.

You re-tag things, yes. You might move things slightly, adjusting position to improve accuracy. But things in OSM don’t “disappear.”

Or if they do, they do so in very, very small sets. A building gets knocked down. A road gets torn out.

One object at a time.

So it seems very plausible to me that the OSM software actually hasn’t been designed to handle mass deletions, and hasn’t been tested for integrity in dealing with mass deletions.

For years now, I’ve been telling people, when you decide to rearrange your territory (and what geofictician doesn’t sometimes decide to rearrange their territory?!) … it’s better to move things than to delete/re-create things.

This prevents “ghosts” in the render.

In the long run, I suspect that we’ll have to just “reboot” the render now and then. But I’m not going to do it very often. (I mean “reboot” as in delete and re-create from zero, not “reboot” as in turn the machine off and on again – that happens every night).
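
To be clear about what a “reboot” of the render would involve, here is a minimal sketch: a from-zero re-import of the render database with osm2pgsql. The database name, style file path, and extract file name are placeholders, and the real procedure would also involve stopping renderd and expiring or clearing the old tiles.

```python
# Sketch of a render "reboot": re-import the render database from scratch,
# from a fresh planet extract. The database name, style path, and extract
# file name are placeholders.
import subprocess

subprocess.run(
    [
        "osm2pgsql",
        "--create",      # build from zero (as opposed to --append)
        "--slim",        # keep intermediate tables, needed for later diff updates
        "--hstore",
        "--database", "gis",
        "--style", "/opt/openstreetmap-carto/openstreetmap-carto.style",
        "ogf-planet.osm.pbf",
    ],
    check=True,
)
```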

I’d welcome comments on this rambling, vaguely rant-like discourse from those who are knowledgeable about how the OSM apidb-to-renderdb pipeline works.

Music to make deletions to: 10cm, “오늘밤에.”

New server, same old city, but less old than before

Having worked furiously all summer on taking over the hosting for Opengeofiction, I have now finally reached a point where I feel like investing some time in the creative side – and actually working on the map again for a while.

I have added quite a bit of new area to the city of Ohunkagan. I’ve been saying that, within my “historical approach,” I’ve brought it up to 1920, but I think there’s more I want to do before I call it officially caught up to 1920. So let’s call it 1917 or so. Also, some of the surrounding communities are still back before 1900 – e.g. Prairie Forge, Iyotanhaha, Riverton. They need to catch up, too. So here’s the new “work-in-progress” gif.

picture

Here is the current snapshot.

picture

[Technical note: screenshot taken at this URL (for future screenshots to match).]

Here’s the wider area snapshot.

picture

[Technical note: screenshot taken at this URL (for future screenshots to match).]

Music to map on a new server by: Lianne La Havas, “Forget.”

OGF Live

The OGFMirror became opengeofiction.net over the weekend.

This is huge news.

So far the site is working okay. Not great, just okay. There are some issues.

  • The ActionController::InvalidAuthenticityToken error is frequent and painful for users.
  • Possibly related to the above – users keep getting forcibly logged out, over and over.
  • The render is still playing catchup on the whole world map, and seems to be lagging around 2 hours for high zooms.
  • Incoming email is completely broken – possibly due to errors in the DNS tables at the opengeofiction.net registrar (and we’re waiting on the old host to fix this)
  • Outgoing email is problematic for a substantial portion of users, due to over-aggressive anti-spam efforts by several major email providers, including Apple (icloud) and Microsoft (hotmail, outlook, live). I’m not even sure how to begin fixing this. I’ve implemented DKIM, which might help, but that also relies on fixing the DNS errors that are not currently being fixed. I’ve also looked into a blacklisting of my email server by spamhaus.org and discovered it is due to my email server sharing an IP range with a Nigerian Prince or somesuch in the server farm where it lives. (A sketch of the DNS checks involved follows this list.)
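
For anyone curious what the DNS side of that looks like, here is a sketch of the kind of check involved – it just shells out to dig to look up the SPF and DKIM TXT records. The DKIM selector (“default”) is an assumption; the real one depends on how the mail server was configured.

```python
# Sketch: check whether the SPF and DKIM TXT records are actually visible in
# DNS, by shelling out to dig. The DKIM selector ("default") is an assumption.
import subprocess

DOMAIN = "opengeofiction.net"
SELECTOR = "default"  # assumed DKIM selector

def txt_record(name: str) -> str:
    out = subprocess.run(
        ["dig", "+short", "TXT", name],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip() or "(no record found)"

print("SPF :", txt_record(DOMAIN))
print("DKIM:", txt_record(f"{SELECTOR}._domainkey.{DOMAIN}"))
```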

Work continues. Meanwhile, to all the users of OGF: “Happy mapping!”

Music to map happily to: Cake, “I Will Survive.”

OGFMirror

I’m super bad about posting to this blog. That’s partly because I feel a strong desire to report some actual, positive progress, which I haven’t felt able to do.

I have been very busy with HRATE technicalities. I am building – very, very slowly – a “mirror” for opengeofiction.net (OGF). I think that if this is successful, the owner of that site, who has expressed interest in “letting go” of having to maintain it, will allow the mirror to take over for the site, and the transition to a new hosting environment will be complete.

Someday, I intend to write up, in elaborate, technical detail, this process of setting up a mirror. But in broad outlines, here is what it involves (has involved, will involve).

  • Build a new Ubuntu 20.04 LTS server. This leads to lots of incompatibilities farther down the line, because the existing OGF server runs an older version. Install the basics – apache, postgresql, etc.
  • Install an OSM rails port on the server.
  • Migrate the OGF data to this server. This was very, very hard – because the OGF data (in either .osm.pbf format or pg_dump format) proved to contain inconsistencies (data corruption). Some missing current nodes and ways had to be restored manually (by text-editing the .osm files, which are just XML). This ended up being a two-week-long process. (A sketch of the kind of consistency check involved appears after this list.)
  • Set up incoming replication from the source apidb (OGF) to the new mirror (currently being called ogfdev).
  • Set up outgoing replication for the new ogfdev instance (to drive render, overpass, etc)
  • Set up a new primary render. This had some sub-parts.
    • coastlines. This proved very difficult, because as far as I can figure out, the osmcoastline tool used to create the coastline shapefiles is broken on Ubuntu 20.04. An older version must be used. My current workaround: I’m actually running coastlines on an older server. I import a coastline-containing pbf file to the older server, run the osmcoastline tool, and post the shapefiles for consumption on the render server.
    • I made a decision to run the renders on a different server than the apidb. I think this might involve a bit more expense, short term, but it makes the whole set of processes more scalable, long term. My experience with Arhet is that the render requires scaling sooner / more frequently than the apidb, as the user base grows. Installing the render software (mod_tile and “renderd”) proved difficult. It turns out that there are some lacunae and downright incorrect steps in the documented installation sequences on github.
    • Set up incoming replication from the ogfdev database to the render database.
    • There are substantial differences in recent versions of the openstreetmap-carto style – specifically, the special shapefiles are no longer stored as data files in a data folder in the render directory. Instead, the shapefiles are loaded into the database. Because non-standard shapefiles are used, this means rewriting the load procedures (python scripts) – the standard approach just grabs the files for “Earth” (because who would run osm for some other planet?!), and that file-grabbing is hard-coded in the procedure. (A sketch of a custom load also appears after this list.)
  • Set up a new topo render. The topo render was shut down on OGF, so this will be the only working version. Unfortunately, I ran into a similar problem with some of the topo pre-processing as I ran into with osmcoastline, above. I suspect for the same reason – something in one of the dependencies they both have. So the topo pre-processing (turning the .hgt files into a contour database) is also being run on a separate, Ubuntu 18.04 server (just like the coastlines).
  • Set up appropriate changes and customizations for the front-facing rails port (osm website). This involves importing user data (done) but also user diaries (not done). These require ad hoc SQL coding that gives me flashbacks to my job as a DBA in the 2000s. Another unfinished piece: internationalization. The current ogfmirror website looks okay, but only in English. Switch to another language, and it all reverts to OSM boilerplate. Why is internationalization done so badly on production software of this kind? I see no easy solution except manually editing each language’s .yml file in turn (OSM has 100+ languages). Or building my own damn application to achieve that result.
  • Set up overpass and overpass-turbo. Overpass installs relatively painlessly, but I’m having trouble getting incoming replication to work correctly. overpass-turbo was quite difficult – the current version on github is flat-out broken, and so an older version (commit) must be compiled and installed. Further, the compilation and configuration process overwrites some of the parameters files, so the parameters files have to be modified after running the first steps of configuration, but before the last part. This is the step I am on right now.
  • Set up nominatim? – nice to have, but not urgent. Anyway, nominatim doesn’t work on the existing OGF website.
  • Implement some of the custom tools that are available on the OGF website: the “scale helper,” the “coastline helper,”…
  • What else? This is a work in progress…
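
Regarding the data-migration step above, here is a sketch of the kind of consistency check involved: flagging ways in a .osm (XML) file that reference node ids not present in the same file. The file name is a placeholder.

```python
# Sketch: flag ways in a .osm (XML) file that reference nodes not present in
# the same file -- the kind of inconsistency that had to be repaired by hand.
# The file name is a placeholder.
import xml.etree.ElementTree as ET

OSM_FILE = "ogf-extract.osm"  # placeholder

root = ET.parse(OSM_FILE).getroot()
node_ids = {n.get("id") for n in root.findall("node")}

for way in root.findall("way"):
    missing = [nd.get("ref") for nd in way.findall("nd")
               if nd.get("ref") not in node_ids]
    if missing:
        print(f"way {way.get('id')} references missing nodes: {', '.join(missing)}")
```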

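And regarding the shapefile step, here is a sketch of loading a custom (non-Earth) shapefile into the render database with shp2pgsql, instead of relying on the hard-coded “Earth” download. The shapefile path, the table name, and the database name are assumptions – the table names a given openstreetmap-carto version actually expects need to be checked against its own configuration.

```python
# Sketch: load a custom coastline shapefile into the render database, instead
# of letting the carto scripts download the "Earth" data. Paths, the table
# name, and the database name are assumptions.
import subprocess

SHAPEFILE = "/data/shapefiles/ogf_water_polygons.shp"  # custom, non-Earth shapefile
TABLE = "water_polygons"    # assumed table name expected by the carto style
DATABASE = "gis"            # render database

# shp2pgsql emits SQL; pipe it straight into psql.
shp = subprocess.Popen(
    ["shp2pgsql", "-I", "-s", "3857", "-d", SHAPEFILE, TABLE],
    stdout=subprocess.PIPE,
)
subprocess.run(["psql", "-d", DATABASE], stdin=shp.stdout, check=True)
shp.stdout.close()
shp.wait()
```
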
So I’ve been busy. Here is a link to the site. Bear in mind, if you are reading this in the future, the link may not show you what I’m currently writing about, but rather some future iteration of it.

https://ogfmirror.com

I’m still working on some of those last steps. Open to hearing what else needs to be done.

Music to hack to: K-os, “Hallelujah.”

Font Fail

As usual, I have neglected this blog. So what’s been going on?

I have taken some steps to migrate one of my major geofictions – The Ardisphere – from OGF to my self-hosted OGFish clone, Arhet. The reason for this is that OGF seems increasingly rudderless and destined to eventually crash and burn, and I am emulating the proverbial rat abandoning the sinking ship. I still hugely value the community there. But the backups have become unreliable, the topo layer (of which I was one of the main and most expert users) has been indefinitely disabled, and conceptual space for innovation remains unavailable.

One small problem I ran up against in migrating The Ardisphere to Arhet: I discovered that Korean characters were not being rendered correctly by the main Arhet map render, called arhet-carto. This is a problem because the Ardisphere is a multilingual polity, and Korean (dubbed Gohangukian) is one of the major languages in use, second only to the country’s lingua franca, Spanish (dubbed Castellanese). I spent nearly two days trying to repair this Korean font problem. I think I have been successful. I had to manually re-install the Google noto set of fonts – noto is notorious (get it?) for being the most exhaustive font collection freely available. I don’t get why the original install failed to get everything – I suspect it’s an Ubuntu (linux) package maintenance problem, rather than anything directly related to the render engine (called renderd, and discussed in other, long-ago entries on this sparsely-edited blog).
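
For anyone hitting the same thing, here is a rough sketch of the fix as I understand it: reinstall the Noto packages (including the CJK set, which covers hangul), rebuild the font cache, and restart the render daemon. The package names are the standard Ubuntu ones; the renderd service name is an assumption, and tiles that were already rendered badly still need to be expired or re-rendered afterwards.

```python
# Sketch: reinstall the Noto font packages (including CJK, which covers hangul),
# rebuild the fontconfig cache, and restart renderd. Package names are the
# standard Ubuntu ones; the service name is an assumption.
import subprocess

steps = [
    ["apt-get", "install", "--reinstall", "-y",
     "fonts-noto-core", "fonts-noto-cjk", "fonts-noto-unhinted"],
    ["fc-cache", "-f", "-v"],               # rebuild the font cache
    ["systemctl", "restart", "renderd"],    # assumed service name
]

for cmd in steps:
    subprocess.run(["sudo"] + cmd, check=True)
```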

Here (below) are before-and-after screenshot details of a specific city name that showed the problem: Villa Constitución (헌법시) is the capital and largest city of The Ardisphere. Ignore the weird border artifacts behind the name on these map fragments – the city is in limbo right now; I was re-creating it and it got stuck in an unfinished state.

Before – you can see the Korean (hangul) is “scattered”:

picture

After – now the hangul is properly-composited:

picture

You can see The Ardisphere on Arhet here – and note that within the Arhet webpage you can switch layers to OGF and see it there too. Same country, different planets!

Music to fiddle fonts by: Attack Attack! “Brachyura Bombshell”

The Terrible mysql Crash of 2021

I still don’t know how it happened. I somewhat suspect I got hacked, somehow… I found strange and unexpected Chinese IP addresses in my mysql error log. But I don’t understand the mysql back end, or its administration, well enough to know for sure what was going on.

I was able to restore a full-server backup to a new server instance, and have re-enabled the mysql-driven websites (my 2 blogs, my wiki, etc.) on the new instance. Meanwhile, I somewhat stupidly reactivated the non-mysql website (the geofictician OSM-style mapping site, the so-called “rails port”) on the old server instance. The consequence of that is that I am now stuck with a two-server configuration where I had a single server configuration before. I think in the long run I’ll want to isolate ALL my mysql-based sites to a single server, and ALL my non-mysql-based sites to another single server. That’s going to take a lot of shuffling things around, which is not trivial.
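
When that shuffling does happen, the core of it will presumably be something like the following: dump all the mysql databases on one server and load them on the other. This is only a sketch with placeholder host names; users and grants may need separate handling, and each site’s config then has to be pointed at the new database host.

```python
# Sketch: dump every mysql database on the old server and load the dump on the
# new one. Host names are placeholders; users and grants may need separate
# handling, and for large databases you'd stream rather than buffer in memory.
import subprocess

OLD_HOST = "old-server.example.com"   # placeholder
NEW_HOST = "new-server.example.com"   # placeholder

dump = subprocess.run(
    ["ssh", OLD_HOST, "mysqldump --all-databases --single-transaction"],
    capture_output=True, text=True, check=True,
)
subprocess.run(["ssh", NEW_HOST, "mysql"], input=dump.stdout, text=True, check=True)
```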

For now this blog (and my other blog) seems healthy and up-and-running again.

There may be more downtime ahead as I try to reconfigure things more logically, however.

Music to do sysadmin drudgery by: Talking Heads, “Found A Job.”

Blocks and more blocks

I keep on with my many-small-steps approach to Ohunkagan. I feel it becoming mythical in my mind and memory. I imagine the corners where scenes take place in unwritten novels, the streetcars characters ride to well-mapped destinations.
I’ve updated the “work-in-progress” gif.

picture


Here is the current snapshot – let’s call it 1907.

picture

[Technical note: screenshot taken at this URL (for future screenshots to match).]

Here’s the wider area snapshot.

picture

[Technical note: screenshot taken at this URL (for future screenshots to match).]

Music to map by: Arvo Pärt, “Tabula rasa.”