Advanced Conlanging…

“Conlanging” is the accepted name for the hobby of inventing new languages. I have been a conlanger since around age 7 – I remember inventing a language for my stuffed animals (specifically, a tribe of stuffed raccoons) at around that age, and though I don’t have my notes, I remember it being fairly sophisticated for something created by a seven-year-old.

For me, as for many conlangers, it has only ever been a kind of side hobby – though it dovetails nicely with another hobby I had as a child, resurrected in my post-cancer years, and which has actually become a major avocation: geofiction (which is the whole point of this blog and website!). And of course, like many conlangers, I was drawn to linguistics as a young adult, and it eventually became one of my undergrad majors at the University of Minnesota. I don’t regret that at all.

Anyway, this post is about conlanging, not geofiction. For some years now, there have been some interesting websites and computer applications for inventing languages and storing the data. But yesterday I found a site that takes it to a new level: vulgarlang.com. I actually rather dislike the name, but I think it’s probably good marketing. The site is created by people who are clearly quite knowledgeable on matters linguistic – to a level I’ve never seen before. I went ahead and paid for the $25 “lifetime” access – we’ll see how that pans out, as I’ve seen many a website offer those terms, only to disappear within 5-10 years or radically alter its business model such that the guarantee never eventuates. But how could I resist? Let there be more conlanging, then – at a higher level of quality than ever before.

As an aside, I haven’t posted much of my conlanging work online at all, but a very incomplete exemplar can be found in this article about the Mahhalian language, which I originally created about six years ago.

On Mass Deletions in OGF

This morning, I posted this in response to a query on the User Diaries at OpenGeofiction:

General advice for all users:

When deleting large numbers of objects, please be careful. This is not a use-case that the OSM software is designed to handle (think about it – mass deletions are NOT common on OSM). Divide up your deletions to cover small numbers of objects (<1000) and small areas (so if something goes wrong you don’t mess up large areas).
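As a sketch of what “divide up your deletions” might look like in practice, here is a short Python example. To be clear, this is a hypothetical illustration, not an official OSM or JOSM tool: `delete_batch` is a made-up callback standing in for whatever upload mechanism you actually use.

```python
# Hedged sketch (not a real OSM/JOSM API): split a large set of object
# IDs into small batches, so each deletion changeset stays under the
# suggested ~1000-object limit.

def chunked(ids, batch_size=1000):
    """Yield successive slices of at most batch_size IDs."""
    for i in range(0, len(ids), batch_size):
        yield ids[i:i + batch_size]

def delete_in_batches(object_ids, delete_batch, batch_size=1000):
    """Invoke delete_batch once per small batch, never all at once."""
    for batch in chunked(object_ids, batch_size):
        delete_batch(batch)

# Demo with 2500 fake IDs: three small changesets instead of one huge one.
uploaded = []
delete_in_batches(list(range(2500)), uploaded.append)
print([len(batch) for batch in uploaded])  # -> [1000, 1000, 500]
```

The same idea applies to the “small areas” advice: besides capping the batch size, you could also group the IDs geographically before batching, so a failed upload only affects one small region.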

I called attention to the post in the Discord channel (“OGFC”), and so I decided to rant onward and provide some additional clarification and thoughts. I’ve distilled those, without much editing, below.

To the above, I would add that in my own practice, I take one more step with respect to deletions: I almost never delete relations. You’ll notice that JOSM actually tries to warn you that “this is rarely necessary” when you try to delete a relation, and I think that’s a solid point. What you can do with relations you no longer need is repurpose them: remove all of the relation’s members and all of its tags, and then reuse the empty relation the next time you need a new relation for something.
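To make the “repurpose, don’t delete” idea concrete, here is a hedged sketch in Python operating on a raw OSM XML fragment (this is not a JOSM feature – JOSM handles this through its relation editor; the example relation and its IDs are invented for illustration). Stripping the members and tags leaves the relation object alive under its original ID, so downstream consumers see a modification rather than a deletion.

```python
# Sketch: "repurpose" a relation by emptying it rather than deleting it.
# The OSM XML structure (relation / member / tag) is the real format;
# the specific IDs and tags here are made up for the demo.

import xml.etree.ElementTree as ET

osm_xml = """<osm version="0.6">
  <relation id="42" version="3">
    <member type="way" ref="100" role="outer"/>
    <member type="way" ref="101" role="inner"/>
    <tag k="type" v="multipolygon"/>
    <tag k="landuse" v="forest"/>
  </relation>
</osm>"""

root = ET.fromstring(osm_xml)
rel = root.find("relation")
for child in list(rel):      # strip every <member> and <tag>
    rel.remove(child)

# The relation element survives, empty, keeping its id and version:
print(ET.tostring(rel, encoding="unicode"))
```

The empty relation can then be re-tagged and given new members whenever you next need one, which (per the speculation below) spares the render pipeline from having to process a relation deletion at all.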

user: What would be the potential harm of deleting relations?

Well, I suspect that sometimes a deleted relation can end up as a “ghost” object, or a “stranded” object, in the render database. Meaning it’s truly been deleted from the API database (the main Rails port), but that deletion fails to transfer correctly to the render database (which stores things in a different format, and doesn’t actually have “relations” at all, but rather transforms a relation into a special type of “closed way,” I think).

By repurposing a relation instead of deleting it, you guarantee the render database will have to reclassify the linked object(s).

If you just delete it, the render database might not handle the disappeared relation correctly, and may just leave the no-longer-linked objects lying around.

This is just speculation, because I don’t really understand it well – it’s just based on observed behavior.

In general, I think that the OSM software is not designed to handle mass deletions. That’s the key point.

Because in OSM, who goes around deleting large numbers of objects? The real world doesn’t work that way.

You re-tag things, yes. You might move things slightly, adjusting position to improve accuracy. But things in OSM don’t “disappear.”

Or if they do, they do so in very, very small sets. A building gets knocked down. A road gets torn out.

One object at a time.

So it seems very plausible to me that the OSM software actually hasn’t been designed to handle mass deletions, and hasn’t been tested for integrity in dealing with mass deletions.

For years now, I’ve been telling people, when you decide to rearrange your territory (and what geofictician doesn’t sometimes decide to rearrange their territory?!) … it’s better to move things than to delete/re-create things.

This prevents “ghosts” in the render.

In the long run, I suspect that we’ll have to just “reboot” the render now and then. But I’m not going to do it very often. (I mean “reboot” as in delete and re-create from zero, not “reboot” as in turn the machine off and on again – that happens every night).

I’d welcome comments on this rambling, vaguely rant-like discourse, for those who are knowledgeable about how the OSM apidb-to-renderdb pipeline works.

Music to make deletions to: 10cm, “오늘밤에.”