Archive for the ‘Unicode’ Category

Va-Chede, a Dying Script

24 March 2011

Bassa (bsq) is a language spoken in Liberia and Sierra Leone by slightly more than 400,000 people.

In “A Dying Liberian Language – The Bassa Va-chede” on the 1847 Post, writer Pianapue Early bemoans the disappearance of the Bassa writing system, an alphabetic script also known as Vah or Va-Chede, which is being replaced by a system based on the Latin alphabet.

According to that article and History of the Bassa Script, tradition holds that Va-Chede was invented by a man named Dee-Wahdayn, who evidently used his teeth to make imprints on leaves; “va” means spit or throw, referring to the action of Dee-Wahdayn “throwing” the words out of his mouth. Interestingly, the article cites Abba Karnga, in the out-of-print “My People, the Bassa Tribe,” as saying the script was already in use when Hannibal visited the area around 520 BCE. During the era of the slave trade, the Bassa would use Va-Chede to avoid capture.

Most other Internet sources, however, do not give credence to this traditional explanation. A more common explanation is that Va-Chede was invented perhaps in the 1830s by the missionary William Crocker or in the early twentieth century by Dr. Thomas Flo Narvin Lewis. In “A Brief Summary of Liberian Indigenous Scripts,” Tim Slager provides a good summary of the history of the Bassa and other scripts in Liberia. See also the Script Encoding Initiative at the University of California, Berkeley for a summary of Va-Chede.

The forms of the Bassa letters are interesting. Examples are provided on “Bassa Alphabet” and on pages 38 to 40 of the Blackwell Encyclopedia of Writing Systems (Amazon). As of November 2010, Bassa Vah had been accepted for encoding in Unicode, though it does not yet seem to be available for use. XenoType Technologies offers a Bassa language kit for USD 19.

Va-Chede marks the tones of the language with dots, as shown on “Bassa Language”; it is the only Liberian script to do so. Va-Chede is also evidently one of only a few alphabetic scripts developed in Africa. An alphabet is a writing system in which each sound, consonant or vowel, is represented by its own letter. This is opposed to a syllabary such as hiragana or Cherokee, where a single symbol stands for an entire syllable (or mora), and to systems like Chinese characters, which are more complex still.

Carlinga for Typing Diacritics

17 March 2011

Accents, circumflexes, cedillas and umlauts. Four types of diacritic marks commonly used in European and other languages. But English rarely uses any. Often it will retain diacritics when first borrowing a word, then gradually lose them. “Depot” is rarely written anymore as depôt (or dépôt) and coöperation has become cooperation.

This lack of need for diacritics meant that in the past, when computers had much less memory and processing power, the English-speaking people who developed software did not build in support for diacritics. For people who need diacritics, this created a problem that has never been completely resolved.

For North American languages, the Language Geek provides an excellent set of fonts and keyboard layouts to assist in typing—at no charge.

Carlinga is another excellent resource. Also free of charge, Carlinga works in the background waiting for you to type a pre-programmed key sequence, then it silently jumps in and replaces the sequence with the programmed equivalent.

For example, type ,\e and Carlinga will convert it into è. Type ,/h and you get an ħ. Or ,/l to get a ł. Generally, it does not matter what software you are using, though some software programs may not support the characters (in which case you are out of luck for that software).

Another nice feature of Carlinga is that it can be customized if the character you need is not pre-programmed.
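The idea behind such a utility is simple enough to sketch. The following Python fragment is only an illustration of the concept, not Carlinga’s actual implementation; the three mappings are the examples given above.

# A minimal sketch of sequence-to-character replacement, in the spirit of
# Carlinga's pre-programmed key sequences (illustration only, not how
# Carlinga itself works).
REPLACEMENTS = {
    ",\\e": "\u00e8",  # è (e with grave accent)
    ",/h": "\u0127",   # ħ (h with stroke)
    ",/l": "\u0142",   # ł (l with stroke)
}

def expand(text):
    # Replace every known key sequence in the text with its character.
    for sequence, char in REPLACEMENTS.items():
        text = text.replace(sequence, char)
    return text

print(expand("Tr,\\es bien"))  # prints "Très bien"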

Carlinga comes with a PDF file showing all the pre-programmed characters, but if you need to find a character not in that list, see List of Unicode characters. If you have Word for Windows, you can also find characters through the Insert Symbol feature; in Word 2007/2010 it is Insert > Symbol > More Symbols. Unicode fonts with broad character coverage include Arial Unicode MS and Lucida Sans Unicode.
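If you prefer to search programmatically, Python’s standard unicodedata module (my own suggestion; it has nothing to do with Carlinga or Word) can look characters up by their official Unicode names:

# Looking up Unicode characters by name, as an alternative to browsing the
# charts or the Word symbol dialog (illustration only).
import unicodedata

print(unicodedata.lookup("LATIN SMALL LETTER H WITH STROKE"))  # ħ
print(unicodedata.name("\u0142"))  # LATIN SMALL LETTER L WITH STROKE (ł)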

Carlinga requires no installation or uninstallation.

For fonts supporting a wide range of characters, see the Language Geek and Unicode fonts, one of which is the pleasant-looking Doulos SIL font.

Internet Explorer Update

14 November 2006

Microsoft has officially released the 7.0 version of Internet Explorer. As reported earlier on this blog, the beta version did not fully implement Unicode, so that some characters used in languages such as dxʷlešúcid (Lushootseed) are not displayed correctly. Although I reported this to Microsoft, the problem has not been addressed. (I have reported the problem again.) It appears to be a fault of the font that Explorer uses, so it’s possible that a work-around might be available.

What is ironic about this is that dxʷlešúcid is the indigenous language of the area around Redmond, home to Microsoft’s headquarters.

Both Firefox and Netscape display dxʷlešúcid correctly, though the glottal stop is easier to read with the Firefox browser.

Microsoft has Quechua but Still Lacks Some Unicode

24 August 2006

The Associated Press announced the Bolivian launch of Quechuan software by Microsoft today. The article notes that the word used for file is “quipu,” “borrowing the name of an ancient Incan practice of recording information in an intricate system of knotted strings.” Both Microsoft Windows and Office offer Quechua. Other languages supported include several varieties of Sami as well as Welsh, Māori and Xhosa.

Microsoft also released its nearly completed version of Internet Explorer 7, named Release Candidate 1. Unfortunately, it still doesn’t fully implement Unicode as can be seen by trying to read the June posts of this blog.

IE and Office

14 July 2006

Microsoft has released a beta (test) version of the next update to its browser, Internet Explorer, in conjunction with the upcoming release of Vista, the next version of Windows. The current version of IE is 6; the beta, IE 7, is now in its third release. Although it is better at displaying Unicode than version 6, it still cannot correctly display all the blog entries in Living Languages from June 2006.

I have submitted an error report, so hopefully Microsoft will add complete Unicode compatibility.

Unicode and UnicodeInput Utility

6 July 2006

When computing started out, memory was limited, so only a small set of characters could be encoded. Different, incompatible ways of representing characters arose, making it difficult to exchange documents.
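A quick way to see the problem, if you have Python at hand, is that the very same byte can stand for different characters under different legacy encodings (the byte value here is chosen only for illustration):

# One byte, two legacy encodings, two different characters (illustrative).
data = bytes([0xE8])
print(data.decode("latin-1"))  # è in ISO 8859-1 (Western European)
print(data.decode("cp1251"))   # и in Windows-1251 (Cyrillic)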

The Unicode Consortium provides a standard way of encoding characters and has taken on the ambitious task of assigning a unique number to every character in human languages. If you go to the Code Charts page and open a chart, you’ll see that each letter has a four-digit (sometimes longer) code below it. That’s the Unicode hexadecimal code for that character. (Hexadecimal is base-16 counting, using the digits 0–9 and A–F.)
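In Python, for instance, you can move between a character and the hexadecimal code shown in the charts like this (a small illustration of my own, not part of the charts themselves):

# A character and its Unicode code point in the hexadecimal form used in the
# code charts (illustration only).
ch = "\u00e8"                  # è
print("U+%04X" % ord(ch))      # U+00E8, the code printed beneath the letter
print(chr(0x00E8))             # è, going from the code back to the character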

Unicode-Enabled Fonts

18 June 2006

How can you write in a language so that everyone can read it without needing a special font (character set)? Write in Unicode, a standard designed to include all languages. Not all software is compatible with Unicode, and there are other issues as well, but here are some fonts available with Windows or free on the Internet.