Minor Fender Bender, 1/5/2012

Poor Fats had a minor fender bender this past Thursday morning. Nothing too serious, certainly not as serious as the orbital fracture and broken nose he had back in 2010. But still enough to warrant a visit to the body repair shop this coming Monday.

Google Map of the location of the accident.

I was heading north on North Lynn Street and made a left to get onto the George Washington Memorial Expressway when the van in front of me stopped suddenly, right at the intersection of the on-ramp with North Fort Myers Drive, thus preventing me from proceeding onto the expressway. A young woman walking her dog was at the corner, and I assumed that the driver had stopped to allow the pedestrian to cross even though it was his right of way. The van backed up some and stopped, and I wondered what was going on; was the driver trying to be REALLY polite and give the pedestrian more room to cross the road? Meanwhile, I was stuck behind the van, worrying that when the traffic light changed I would be blocking the traffic going south on North Fort Myers Drive.

Then the van backed up and hit me. I immediately got out of Fats with my gas mileage notebook and a pen. The driver (an older Asian man) and his three passengers (young women, all) got out of the van; I assumed from their looks that they were Korean. The pedestrian said something to me like “What a nice way to start your morning, huh?” and apologized even though it wasn’t her fault, saying that she should have waved the van on more insistently. I told her that I had seen the accident about to happen but had not been able to back out of the way in time. The pedestrian volunteered her information as a witness and wrote her contact information in my notebook; I gave her my own, tearing a sheet out of the notebook. Similarly, the front-seat passenger gave me her contact information, and I gave her mine. I noticed that her last name sounded Chinese rather than Korean. I asked the pedestrian whether her dog was a Shih Tzu, and she said yes.

We then went our separate ways. The two passengers from the rear seat removed luggage from the back of the van and started walking south on North Fort Myers Drive, perhaps to the Key Bridge Marriott. The van proceeded onto the GW Memorial Expressway, and I followed.

Later that day, I got a call from the front-seat passenger’s insurance company, which will cover my damages. We arranged for me to meet their adjuster at an automobile repair shop this coming Monday morning. I’ll pick up a car from a nearby rental company and drive it while Fats is being repaired. Also that same day, I went out to lunch with my co-worker and lunch buddy, A.B., and told him about the car accident. He noted with some amusement that I had come away from the accident with the contact information of two young women.

In thinking it over, I now realize that the driver wasn’t trying to be overly chivalrous to the woman walking her dog. Instead, I think that he was driving the two passengers in the rear seat to the Key Bridge Marriott, and was heading north on North Lynn Street. But instead of making a U-turn just before the Key Bridge to get to the Marriott, he went straight onto the ramp leading to the GW Memorial Expressway. His passengers must have said, “No! No! Don’t go onto the Expressway! Turn left to go to the Marriott!”, causing him to stop. But unfortunately it was too late: he was too far onto the ramp, and I was behind him, preventing him from backing up and making the U-turn that he should have made. And I think that he knew I was behind him, but backed up into me anyway out of sheer frustration and anger!

Roti Mediterranean Grill Now Open in Rosslyn

I was out on a walk earlier today and noticed that the Roti restaurant on the corner of Wilson Boulevard and North Oak Street is now open. So I decided to give it a try.

You’re encouraged to design your own meal, and I was going to order a chicken sandwich with the hot S’hug (zhug) sauce and Spanish eggplant. But I noticed that the Sultan Sandwich came close (except it didn’t have the eggplant), so I ordered that. I also tried the Lemon Chicken Soup (which I think is avgolemono). For a drink, I had a bottle of Pomegranate Blue. It was a nice meal, especially the soup, and I am sure I’ll be back. A fine addition to the local restaurant scene in Rosslyn.

I asked them when they had opened, and it turns out that they opened just yesterday. I think next time I’ll try the Spanish eggplant.

Hublot painstakingly recreates a mysterious, 2,100-year-old clockwork relic…

I WANT.


Monster Slipper

Customer Tom Boddingham received this size 1,450 monster foot after a ‘clerical error’.

Makes me wonder how it got paid for. If I were to see such a thing in a department store, perhaps as a novelty sleeping bag, I wouldn’t be surprised to see it costing several hundred dollars. But surely Mr. Boddingham wouldn’t agree to pay several hundred dollars for a slipper. (Or maybe he would, if it were a customized orthopedic job?) Maybe he’d agree to pay around, say, the equivalent of fifty dollars U.S.

But I would think that the factory in Hong Kong wouldn’t agree to make something worth several hundred dollars and be paid only fifty dollars.

So, how did this slipper get made?

And, can I have one too?
🙂


Someone should alert the Bigfoot Field Researchers Organization.


Million-dollar typos cause worldwide losses


We mustn’t fail to mention Fats’ take on the matter.

There were four of us: me, your big feet, and YOU.

(Previous post in which I referenced this Waller performance.)


OK, this is stretching this thought chain to near breaking, but what about the Salish Sea human foot discoveries, in British Columbia? Some of those feet have been identified.

Haunted Laundry Machine?

I had added this as a mere addendum to the earlier post about The Mangler (one of my favorite short stories by Stephen King), but it was so striking that I must give it its own post.

Video: Five-Year-Old Rescued From Washing Machine

Quote from the video:

Surprisingly, that machine was out of order. “And I haven’t been able to run it since, and I haven’t been able to run it just before.”

Man, that’s spooky. Brrrrrrrrh.


Now all we need is for them to release grainy black-and-white surveillance footage from a CCTV (no audio) showing the child apparently talking to some… thing… inside the laundry machine, and then climbing into the machine, seemingly hypnotized. Grotesque arms or tentacles reach out of the machine to help the child in. (The video said that the machine was high enough that the child would have had difficulty climbing into it. At least, unassisted.)

Then the door closes by itself. Furtively, without slamming.

Then the machine turns itself on.

There are adults in the background, but their backs are turned or they’re busy measuring detergent or folding laundry. So nobody notices anything amiss. The sound of a washing machine starting up wouldn’t have been out of the ordinary in a laundromat.

When interviewed later, in the emergency room, the child reported seeing “a little man” inside the laundry machine.


The story as it appeared in The Wenatchee World on November 7: Child injured in laundry mishap

And as it appeared on the KVUE website the following day: Mother rescues 5-year-old girl trapped in washing machine


“Have you ever wondered if that laundry machine you told me about is haunted, Johnny?”

— Quote from The Mangler


Previous post on spooky laundry rooms.


Addendum (4/2/12): 1-year-old Ore. boy drowns in washing machine

JSTOR Access Regained!

The juice is back on.

In the preceding post, I mentioned a comment on a MetaFilter post indicating that JSTOR access is possible through some public libraries, culled from a list on the JSTOR web site (the complete list of JSTOR participants is very long, and I wonder whether the list of public libraries is exhaustive; that culling must have taken some time, and I imagine it was exhausting). It turned out that one of those public library systems is very close to me: the Prince William Public Library System (PWPLS). I emailed PWPLS asking whether I could obtain a library card, even though I live in Arlington County rather than Prince William County. They said that I could, because they have a reciprocal lending arrangement with Arlington.

Yesterday, I went to the nearest full-service branch of the PWPLS, the Bull Run Regional Library, and obtained a PWPLS library card. When I got back home, I tried accessing JSTOR through my new library account. It works! I was able to download some interesting papers off of JSTOR last night, including a paper about the Mexican Virgin of Guadalupe (connected with the “devouring mother” Aztec goddesses, Tonantzin and Coatlicue, which I mentioned in a previous post), as well as a primer on Data Envelopment Analysis (reminds me of the efficient frontier in Modern Portfolio Theory).

A few weeks ago, I obtained a Reader Card from the Library of Congress, hoping that I might get access to JSTOR through their electronic services. Unfortunately, the Library of Congress’ electronic resources don’t include JSTOR. However, I might still use my Reader Card for other purposes. For example, Alex Hassan (the local jazz pianist, not the baseball player) sometimes goes digging in the bowels of the Library of Congress looking for interesting sheet music; maybe I can do the same.

Antique Papers Now Freely Available on JSTOR

Well, it’s a start. Via MetaFilter: Early JSTOR publications now free. In the comments, MeFite marble says that there may be a way to obtain cheap access to JSTOR via public libraries; probably worth investigating.

A related recent MetaFilter post: The Lairds of Learning. In the comments, MeFite phalene states:

“… losing the free, degree attached connection to the journal banks is the thing I’ll miss most about finishing being a student.”

I can sympathize! The juice has been cut off.

In the same post, MeFite ovvl links to this interesting article: The Library: Three Jeremiads. In this triptych of peeves, Robert Darnton writes:

While prices continued to spiral upward, professors became entrapped in another kind of vicious circle, unaware of the unintended consequences. Reduced to essentials, it goes like this: we academics devote ourselves to research; we write up the results as articles for journals; we referee the articles in the process of peer reviewing; we serve on the editorial boards of the journals; we also serve as editors (all of this unpaid, of course); and then we buy back our own work at ruinous prices in the form of journal subscriptions—not that we pay for it ourselves, of course; we expect our library to pay for it, and therefore we have no knowledge of our complicity in a disastrous system.

I have mentioned JSTOR earlier here and here.


Data Without Borders: Kickoff “Data Dive”

A group named Data Without Borders plans to hold a Data Dive on October 14-16, 2011. In this event, non-profits and NGOs make available data they’d like to have analyzed; statisticians, data scientists, and “data hackers” crunch the numbers pro bono.

The upcoming event is in NYC (do you need to be physically *in* NYC to participate?!), but they’re planning similar events in other cities, including DC. I have subscribed to their mailing list; maybe I can catch the DC event, and maybe it’ll be a total geekfest.

Markov Chain Englishoid Sentences

OK, this is really geeky, so please bear with me.

In the classic DOS game Rogue, magic scrolls have titles with gibberish names, probably generated on the fly with a simple random character generator. The Rogue random scroll name generator seems to assume independence between successive characters — i.e., the probability that a character will be generated is not influenced by the immediately preceding characters. OK, at best, it might have some rule that you can’t have too many consonants in a row. In any case, it generates character strings that don’t follow the conventions of English, and that are often unpronounceable.

To illustrate what I’m talking about, this R command generates ten random small and capital letters:

rawToChar(as.raw(sample(c(65:90, 97:122), 10)))

An example of output is:

“ZhcOJbfPxW”

(Doesn’t look much like English, and looks impossible to pronounce. Might make a reasonably secure password, though. Actually, it’s secure no longer, now that it’s displayed for all the world to see.)

I have long thought that it might be fun to generate random words in which successive characters are not independent, with frequencies based on those observed in actual English text. For example, if the letter q is generated, then the next letter is probably going to be a u and not a k, because that’s how English works in general.
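The core of the idea fits in a few lines of R. Here’s a bare-bones sketch with only one character of memory (not the two-character scheme I actually implement below), trained on a toy placeholder string rather than real text:

# Minimal sketch: each character is drawn conditional on the one before it.
# The input string is just a toy placeholder; any English text would do.
text  <- "the quick brown fox jumps over the lazy dog and the end"
chars <- strsplit(text, "")[[1]]

# Contingency table of transitions: rows = current character, columns = next.
transitions <- table(head(chars, -1), tail(chars, -1))

current <- "t"                              # arbitrary starting character
output  <- current
for ( i in 1:20 ) {
    if ( !current %in% rownames(transitions) ) break
    counts  <- transitions[current, ]
    if ( sum(counts) == 0 ) break           # no observed successors
    current <- sample(colnames(transitions), 1, prob = counts)
    output  <- c(output, current)
}
cat(paste(output, collapse = ""), "\n")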

Now, Markov chains have been in my mind recently, partly because they were mentioned as a possible technique for some data imputation we had to do at work (we actually ended up not using Markov chains), and partly because of a recent MetaFilter post entitled Markov Bible. And then I thought, after all these years just thinking about it, why don’t I try implementing a Markov chain that generates random Englishoid words strung out in sentences, where the frequency distribution of characters is conditional on the preceding two characters, using real English sentences as data?

So, I went ahead and implemented such an algorithm in R, with two-character memory, i.e., the probability distribution of characters generated is based on the preceding two characters. Sentences were defined as starting with a capital letter, so to initialize sentences I just took the overall frequencies of capital letters as the starting distribution. And sentences were defined as ending with either a period, a question mark, or an exclamation point. If a line of text ended with a hyphen, it was assumed that a word had been split between that line and the following line; otherwise, it was assumed that there was a blank space between the last word of that line and the following line. As a self-imposed challenge, for memory efficiency, I required the code to read the text into a memory buffer line by line, rather than reading the whole text file into memory at once.

To test my code, I took the text of H.P. Lovecraft’s story, The Dunwich Horror, and put it into a plain text file. I then read the text file into R and put the two-character transition information into a matrix, using these two lines of R code:


fileName       <- "DunwichHorror.txt"
twoCharFreqMat <- buildTwoCharacterFreqMatrix(fileName)

(buildTwoCharacterFreqMatrix() is one of the R functions I wrote; complete R code is given below.) I then generated 10 random Englishoid sentences with this loop:


# Generate 10 random sentences.
for ( k in 1:10 ) {
    sentence <- generateSentence(twoCharFreqMat,2000)
    print(sentence)
    flush.console()
}

And these are the 10 sentences that I got:

  1. “Night it thaff fly terye four er dow in’ hill Saw posprigarthe but – the fecand of the Whad of Arackeplated comithe pill.”
  2. “But of whan the few move deves wit.”
  3. “Deen.”
  4. “Earmits The hathe of dismew therethe becuen spook fable foreleatele whe hind to aritfuld boodmight ingicir and ded, of tweres The andeeteved ung thouse.”
  5. “Whal Ric in com tiong thot hing maing.”
  6. “Dr whipped, pooke rhathe fick hing to se proboadmit itaind hude feentomemany siong crome fols, ithe an – wok thad.”
  7. “Sep facle a bight ‘Thembere ninig spaing alkateley semplen tion lary canxingthe ocitelleas righ ders stin’s couraboo derip, ang and ally ons toped frouggled he yetly Pocke daters be – no evele on, fullintingle con theectiouseeps wee lumand outed preeng Mist rid the gi mys her to ch mas down ers aowde pland th re mompow thadvel doultaganereecturs cand ancer to thicoust wougend fas the hey bey valmoor Spre sed med Dr far He skulder pawfuload shand ais anced whisilly to thrumber It and Eng, bodeeled thattenibe knoistionew that’s ham thers thaven halmounry par geopy sibles they dessen knot frine and bould thated a dremomic thindern wass wriblike and matherast whin blear slever pre 751stray What parthe gan’ tred, waysidered hateds hey forge bulthe Harmisideddes ousen hattom anniall rorbione ‘He whortenclousle st ashe cows eve ninithe only we hattes inds he quing ing, anday; the mento huseer the irs bovencenter priershromed ton orniand morroo card aught lown thad compin st he his – the dicitatiour’s drand dreak tat.”
  8. “Whaterumbloseciarseem itich as olls wed thided black which alle pronstramen Curated th on.”
  9. “Bis seelle manew fres tel up opeartill ne par vintimple tame.”
  10. “It steroormead dirs mand It tir thatench, of as bout call dis horstandarty, obbatim Some fronce unds and an’ they tharrome red whirthoute Will explace reemblact strus ad, east wo le of throdunks stalf, fout that theepok andistonsturings Curat row of Spreen diall an sawas Dr expeem of and ouse vered leckly aincamis beark, and the reen th.”

Cool, no? Granted, this is nothing earth-shaking, and I am not the first person to have done something like this. I am doing this just for the sheer fun of it.

Many of the words look like they follow English conventions, sort of, and some of them are in fact real English words. But overall, it looks more like a foreign language. German or Dutch, maybe? Some of the words in sentence #5 sound… Vietnamese? I like how the commas, dashes, and semicolons seem to imply some train of thought, some reasoned argument, but you just can’t understand the words.

Note the sheer length of #7 — a “run-on” sentence. Maybe this suggests that Lovecraft tended to write longer sentences, so that sentence-ending punctuation marks are relatively rare in his prose compared to other writers? But note that #3 is only one word. Hmm, if I had instead used Ernest Hemingway for input, would I have gotten more one-word sentences like #3?

Part of the reason I chose The Dunwich Horror for input was that Lovecraft wrote in a rather stilted, archaic-sounding style of English (some might call it “purple prose”), which I thought might be interesting for this little project. The Dunwich Horror has some sentences in which Lovecraft puts on a faux New England accent; this might have influenced my “transition matrix,” but I think only very slightly.

Suppose you used text from some language other than English as input to this R code? Would you be able to tell that the result was based on non-English input?

Or suppose that instead of basing the character generation probability on the previous two characters, you base it on the previous three characters? I bet that the result will really start looking like English. I must try it sometime.
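If I do try it, one cheap approach (just a sketch, reading the whole file into memory at once and ignoring both my line-by-line constraint and the matrix layout used in the code below) would be to key a frequency table on the previous three characters as a string:

# Sketch of a three-character memory, keyed on trigram strings.
chars <- strsplit(paste(readLines("DunwichHorror.txt"), collapse = " "), "")[[1]]
n     <- length(chars)

# Row names are trigrams; columns are the possible next characters.
keys <- paste0(chars[1:(n-3)], chars[2:(n-2)], chars[3:(n-1)])
freq <- table(keys, nextChar = chars[4:n])

sampleNext <- function(freq, lastThree) {
    # Draw the next character conditional on the preceding three.
    if ( !lastThree %in% rownames(freq) ) return(NA_character_)
    sample(colnames(freq), 1, prob = freq[lastThree, ])
}

sampleNext(freq, "the")    # draw a character that has followed "the"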


In the comments of the Markov Bible post on MetaFilter, a link is given to this web page: Mark V. Shaney. This does something very similar to what I have done here, except it operates on words rather than on letters.
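For comparison, here’s a rough sketch of the word-level variant in R (whitespace tokenization only, no sentence-boundary handling, arbitrary starting word):

# Word-level sketch: each word is drawn conditional on the preceding word.
words <- scan("DunwichHorror.txt", what = character(), quote = "")

# For each word, collect the multiset of words observed to follow it;
# sampling uniformly from that multiset is frequency-weighted sampling.
successors <- split(tail(words, -1), head(words, -1))

current <- sample(names(successors), 1)     # arbitrary starting word
output  <- current
for ( i in 1:30 ) {
    nexts <- successors[[current]]
    if ( is.null(nexts) ) break             # word only ever appeared last
    current <- sample(nexts, 1)
    output  <- c(output, current)
}
cat(paste(output, collapse = " "), "\n")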


Check out the SCIgen web page. You can create a randomly generated computer science paper with yourself and your cat as co-authors. You might even get it accepted in a peer-reviewed (?) journal or conference.

I have mentioned SCIgen in a previous post.


Random SBIR Grant Proposal Generator

and

arXiv vs. snarXiv

both via Division By Zero, in a post entitled 12 scholarly hoaxes, randomly generated articles, and other tricky fun


Here’s my R code for generating Englishoid sentences:


getNextNonemptyLine <- function(filePointer) {
    # Keeps reading lines from FILEPOINTER until it gets a non-empty line.
    # If it hits EOF before getting a non-empty line, returns an empty string.
    nextLine <- readLines(con=filePointer, n=1)

    if ( length(nextLine) == 0 ) {
        return("")
    }

    while ( nextLine == "" ) {
        nextLine <- readLines(con=filePointer, n=1)
        if ( length(nextLine) == 0 ) {
            return("")
        }
    }

    return(nextLine)
}

getNextBatchOfChars <- function(filePointer,n) {
    # Keeps reading non-empty lines from FILEPOINTER until it has gathered
    # AT LEAST n characters.
    # If it is unable to do so, returns an empty string.
    nextLine <- getNextNonemptyLine(filePointer)
    if ( nchar(nextLine) >= n ) {
        return(nextLine)
    }

    returnVal <- nextLine
    nextLine  <- getNextNonemptyLine(filePointer)
    while ( ( nchar(returnVal) < n ) & ( nextLine != "" ) ) {
        returnVal <- sprintf("%s%s",returnVal,nextLine)
        nextLine  <- getNextNonemptyLine(filePointer)
    }
    if ( nchar(returnVal) < n ) {
        return("")
    } else {
        return(returnVal)
    }
}

getNextBatchOfAscii <- function(filePointer,n) {
    # Keeps reading non-empty lines from FILEPOINTER until it has gathered
    # AT LEAST n ASCII values.
    # If it is unable to do so, returns an empty NUMERIC array.
    emptyArray       <- as.integer(rep(0,0))
    returnVal        <- emptyArray
    nextBatchOfChars <- getNextBatchOfChars(filePointer,n)
    if ( nextBatchOfChars == "" ) {
        return(emptyArray)
    }
    nextBatchOfAscii <- as.integer(charToRaw(nextBatchOfChars))
    returnVal        <- nextBatchOfAscii
    while ( length(returnVal) < n ) {
        nextBatchOfChars <- getNextBatchOfChars(filePointer,n)
        if ( nextBatchOfChars == "" ) {
            return(emptyArray)
        }
        nextBatchOfAscii <- as.integer(charToRaw(nextBatchOfChars))
        returnVal        <- c(returnVal,nextBatchOfAscii)
    }

    # If RETURNVAL is still too short, return empty array.
    if ( length(returnVal) < n ) {
        return(emptyArray)
    } else {
        return(returnVal)
    }
}

generateFirstTwoCharacters <- function(twoCharFreqMat,charList) {
    # Generate first character, restricting it to capital letters.
    # To determine the first character, compute the frequencies of
    # capital letters across all columns.
    # Keep in mind that ASCII codes start at 0, not 1.
    oneCharFreqMat <- apply(X=twoCharFreqMat,MARGIN=1,FUN=sum)
    # Indices 66:91 correspond to ASCII 65:90, i.e., 'A' through 'Z'.
    oneCharFreqMat[ c(1:65,92:128) ] <- 0
    numChar        <- length(oneCharFreqMat)
    oneCharCumSum  <- cumsum(oneCharFreqMat)
    oneCharCumSum2 <- c(1,oneCharCumSum[1:(numChar-1)]+1);
    randNum        <- sample(x=oneCharCumSum[numChar],size=1)
    indexSelect1   <- (1:numChar)[ ( oneCharCumSum2 <= randNum ) & ( oneCharCumSum >= randNum ) ]

    # To determine the second character, now compute frequencies across
    # columns where indexSelect1 is the second character.
    oneCharFreqMat <- rep(0,numChar)
    for ( i in 1:numChar ) {
        columnIndex    <- ( ( i - 1 ) * numChar ) + indexSelect1
        oneCharFreqMat <- oneCharFreqMat + twoCharFreqMat[ , columnIndex ]
    }
    oneCharCumSum  <- cumsum(oneCharFreqMat)
    oneCharCumSum2 <- c(1,oneCharCumSum[1:(numChar-1)]+1);
    randNum        <- sample(x=oneCharCumSum[numChar],size=1)
    indexSelect2   <- (1:numChar)[ ( oneCharCumSum2 <= randNum ) & ( oneCharCumSum >= randNum ) ]

    return(c(indexSelect1,indexSelect2))
}

generateSentence <- function(twoCharFreqMat,nCharLimit) {
    # Generate first two characters.
    charList       <- ( 0:127 )
    charsGenerated <- generateFirstTwoCharacters(twoCharFreqMat,charList)
    char1          <- charsGenerated[1]
    char2          <- charsGenerated[2]

    # Now keep generating characters, Markov Chain-like, until
    # we hit a period, question mark, exclamation point,
    # OR all zero probabilities.
    # ASCII number for period is 46.
    # ASCII number for question mark is 63.
    # ASCII number for exclamation point is 33.
    # But keep in mind that ASCII codes start at 0, not 1.
    #
    numChar <- nrow(twoCharFreqMat)
    while ( ( char2 != 47 )
          & ( char2 != 64 )
          & ( char2 != 34 )
          & ( length(charsGenerated) < nCharLimit ) ) {
        columnIndex    <- ( ( char1 - 1 ) * numChar ) + char2
        oneCharFreqMat <- twoCharFreqMat[ , columnIndex ]
        if ( all(oneCharFreqMat==0) ) {
            break
        }
        oneCharCumSum  <- cumsum(oneCharFreqMat)
        oneCharCumSum2 <- c(1,oneCharCumSum[1:(numChar-1)]+1)
        randNum        <- sample(x=oneCharCumSum[numChar],size=1)
        nextChar       <- (1:numChar)[ ( oneCharCumSum2 <= randNum ) & ( oneCharCumSum >= randNum ) ]
        charsGenerated <- c(charsGenerated,nextChar)
        char1          <- char2
        char2          <- nextChar
    }
    sentence <- rawToChar(as.raw(charList[charsGenerated]))
    return(sentence)
}

buildTwoCharacterFreqMatrix <- function(fileName) {
    # There are 128*128 = 16384 bins to track.
    charList <- ( 0:127 )
    numChar  <- length(charList)
    numList  <- ( 1:numChar )

    # Initialize two-character frequency matrix.
    # Rows will be the 128 output characters.
    # Columns will be the 128*128 possible pairs of preceding characters
    # that generate one of the 128 output characters.
    twoCharFreqMat <- matrix(0,nrow=numChar,ncol=numChar*numChar)

    # Open text file for input.
    filePointer <- file(fileName,"r")

    # Initialize ASCIIARRAY, which contains a batch of text from the input file.
    asciiArray <- getNextBatchOfAscii(filePointer,3)
    numAscii   <- length(asciiArray)

    # Process lines of text, and keep reading lines of text until EOF.
    lineCount <- 0
    while ( numAscii > 3 ) {
        print(sprintf("Line count = %d",lineCount))
        flush.console()

        # Fill in frequencies in twoCharFreqMat.
        k        <- 1
        cIn1     <- numList[ asciiArray[k]   == charList ]
        cIn2     <- numList[ asciiArray[k+1] == charList ]
        cOut     <- numList[ asciiArray[k+2] == charList ]
        rowIndex <- cOut
        colIndex <- ( ( cIn1 - 1 ) * numChar ) + cIn2
        twoCharFreqMat[rowIndex,colIndex] <-
            twoCharFreqMat[rowIndex,colIndex] + 1

        # Process the rest of the batch.  (Require at least 5 characters
        # so that the range 2:(numAscii-3) counts upward.)
        if ( numAscii >= 5 ) {
            for ( k in 2:(numAscii-3) ) {
                cIn1     <- cIn2
                cIn2     <- cOut
                cOut     <- numList[ asciiArray[k+2] == charList ]
                rowIndex <- cOut
                colIndex <- ( ( cIn1 - 1 ) * numChar ) + cIn2
                twoCharFreqMat[rowIndex,colIndex] <-
                    twoCharFreqMat[rowIndex,colIndex] + 1
            }
        }

        # Get next line of text, and prepare for next iteration through
        # WHILE loop.  If the last character was a hyphen, consider it
        # to mean that the word continues to the next line. Otherwise,
        # treat the end of line as if it were simply a space between
        # words.
        # ASCII number for blank space is 32.
        # ASCII number for hyphen is 45.
        if ( asciiArray[numAscii] == 45 ) {
            pre1 <- as.raw(asciiArray[numAscii-2])
            pre2 <- as.raw(asciiArray[numAscii-1])
        } else {
            pre1 <- as.raw(asciiArray[numAscii])
            pre2 <- as.raw(32)
        }
        asciiArray <- getNextBatchOfAscii(filePointer,1)
        asciiArray <- c(pre1,pre2,asciiArray)
        numAscii   <- length(asciiArray)

        lineCount  <- lineCount + 1
    } # end while ( numAscii > 3 )

    # Close the input file.
    close(filePointer)
    print("")

    return(twoCharFreqMat)
}

# Build two-character frequency matrix.
fileName       <- "DunwichHorror.txt"
twoCharFreqMat <- buildTwoCharacterFreqMatrix(fileName)

# Generate 10 random sentences.
for ( k in 1:10 ) {
    sentence <- generateSentence(twoCharFreqMat,2000)
    print(sentence)
    flush.console()
}


Addendum (09/08/11): Here’s a random academic sentence generator. And here’s a Postmodernism Generator (every time you click on that link, you get a new random essay, reminiscent of the Sokal Affair). Here’s an example of output, complete with Lovecraftian references: Lacanist obscurity in the works of Joyce.

Hurricane Irene

OK, I’m about to leave Arlington. I will stay overnight with my sister and her family in Round Hill, VA, to wait out the storm. If all goes well, I will return to Arlington tomorrow afternoon, and maybe Monday will be a regular workday.

An earthquake and a hurricane less than a week apart!


Hurricane and Cyclone (green MTG cards)
