[PC] SMB Crimson Hours

This is a Mario-based game, except it’s super bloody. The characters and monsters are basically what you would expect from a Mario game, but this is not for kids. You control a Mario who carries a gun and can throw grenades. The monsters (and Mario) routinely explode into pieces. The BGM is similar to Mario’s, except in heavy metal form. Bee-bee-bee-bo-bo-BOOM~


So the story starts with Bowser killing Princess Peach. Mario, who arrives after Bowser is gone, picks up his gun and sets out for revenge. The control keys are ‘z’, ‘x’ and the arrow keys. I often got them mixed up, because the first game I played used ‘z’ to jump and ‘x’ to attack. I forget which game it was, but it was on a GB emulator. Anyway, these are the control keys you should be familiar with:

  • X: jump
  • Z: shoot
  • ↓ + Z : throw grenade


Below is a gameplay video by some awesome guy, which should give you a good idea of what this game is about. One little note though: because this is a fan-made game, there are some small bugs (for example, you might get trapped between tubes and never be able to jump out), which are kind of annoying. So be careful!

How to Embed Youku Videos to WordPress

First, you wanna copy the HTML code from Youku’s share function:

<embed src="http://player.youku.com/player.php/sid/XNDYwMTEzMTY0/v.swf" allowFullScreen="true" quality="high" width="480" height="400" align="middle" allowScriptAccess="always" type="application/x-shockwave-flash"></embed>

Then, take the URL and convert it to the following:

[gigya src="http://player.youku.com/player.php/sid/XNDYwMTEzMTY0/v.swf" allowFullScreen="true" quality="high" width="480" height="400" allowScriptAccess="always" ]

You can change the width and height, as well as the other attributes.
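If you do this often, the conversion is easy to script. Below is a rough Python sketch; the attribute names are assumed to match Youku’s standard share snippet, and `embed_to_gigya` is just an illustrative name, not part of any library:

```python
import re

def embed_to_gigya(embed_html):
    """Convert a Youku <embed> tag into a WordPress [gigya] shortcode.

    A rough sketch: pulls out src/width/height with regexes and rebuilds
    the shortcode shown above. Assumes attributes are double-quoted, as
    in Youku's share snippet.
    """
    def attr(name, default=None):
        m = re.search(r'%s="([^"]*)"' % name, embed_html)
        return m.group(1) if m else default

    src = attr("src")
    if src is None:
        raise ValueError("no src attribute found")
    return (
        '[gigya src="%s" allowFullScreen="true" quality="high" '
        'width="%s" height="%s" allowScriptAccess="always" ]'
        % (src, attr("width", "480"), attr("height", "400"))
    )

embed = ('<embed src="http://player.youku.com/player.php/sid/XNDYwMTEzMTY0/v.swf" '
         'allowFullScreen="true" quality="high" width="480" height="400" '
         'align="middle" allowScriptAccess="always" '
         'type="application/x-shockwave-flash"></embed>')
print(embed_to_gigya(embed))
```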

Source: justpi’s solution

4.2 Context-Free Grammars


The Formal Definition of a Context-Free Grammar

  1. Terminals are the basic symbols from which strings are formed. The term “token name” is a synonym for “terminal,” and frequently we will use the word “token” for terminal when it is clear that we are talking about just the token name. We assume that the terminals are the first components of the tokens output by the lexical analyzer. Examples of terminals are the keywords if and else and the symbols “(” and “)”.
  2. Nonterminals are syntactic variables that denote sets of strings. Examples are stmt and expr. The sets of strings denoted by nonterminals help define the language generated by the grammar. Nonterminals impose a hierarchical structure on the language that is key to syntax analysis and translation.
  3. In a grammar, one nonterminal is distinguished as the start symbol, and the set of strings it denotes is the language generated by the grammar. Conventionally, the productions for the start symbol are listed first.
  4. The productions of a grammar specify the manner in which the terminals and nonterminals can be combined to form strings. Each production consists of:
    1. A nonterminal called the head or left side of the production; this production defines some of the strings denoted by the head.
    2. The symbol ->. Sometimes ::= has been used in place of the arrow.
    3. A body or right side consisting of zero or more terminals and nonterminals. The components of the body describe one way in which strings of the nonterminal at the head can be constructed.
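The four components above (terminals, nonterminals, a start symbol, and productions) can be captured in a tiny data structure. Below is a minimal Python sketch using the classic expression grammar as the example; the dictionary layout is mine, just one convenient way to represent a CFG:

```python
# A minimal sketch of the four components of a context-free grammar,
# using the classic expression grammar as an example.
grammar = {
    "terminals":    {"id", "+", "*", "(", ")"},
    "nonterminals": {"E", "T", "F"},
    "start":        "E",
    # head -> list of alternative bodies (each body is a tuple of symbols)
    "productions": {
        "E": [("E", "+", "T"), ("T",)],
        "T": [("T", "*", "F"), ("F",)],
        "F": [("(", "E", ")"), ("id",)],
    },
}

# Sanity check: every head is a nonterminal, and every symbol in every
# body is either a terminal or a nonterminal.
symbols = grammar["terminals"] | grammar["nonterminals"]
for head, bodies in grammar["productions"].items():
    assert head in grammar["nonterminals"]
    for body in bodies:
        assert all(s in symbols for s in body)
print("grammar is well-formed")
```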

Notational Conventions

  1. These symbols are terminals:
    1. Lowercase letters early in the alphabet, such as a, b, c
    2. Operator symbols such as +, *, and so on
    3. Punctuation symbols such as parentheses, comma, and so on.
    4. The digits 0, 1, …, 9
    5. Boldface strings such as id or if, each of which represents a single terminal symbol.
  2. These symbols are nonterminals:
    1. Uppercase letters early in the alphabet, such as A, B, C
    2. The letter S, which, when it appears, is usually the start symbol.
    3. Lowercase, italic names such as expr or stmt.
    4. When discussing programming constructs, uppercase letters may be used to represent nonterminals for the constructs. For example, nonterminals for expressions, terms, and factors are often represented by E, T, and F, respectively.
    5. Uppercase letters late in the alphabet, such as X, Y, Z, represent grammar symbols; that is, either nonterminals or terminals.
    6. Lowercase letters late in the alphabet, chiefly u, v, …, z, represent (possibly empty) strings of terminals.
  3. Lowercase Greek letters, α, β, γ for example, represent (possibly empty) strings of grammar symbols. Thus, a generic production can be written as A -> α, where A is the head and α is the body.
  4. A set of productions A -> α1, A -> α2, …, A -> αK with a common head A (call them A-productions), may be written as A -> α1 | α2 | … | αK . Call α1, α2, …, αK the alternatives for A.
  5. Unless stated otherwise, the head of the first production is the start symbol.
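Putting these conventions to work, the usual expression grammar reads as follows (E, T, and F are nonterminals with E the start symbol; +, *, (, ), and id are terminals):

```
E -> E + T | T
T -> T * F | F
F -> ( E ) | id
```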

Summary of Chapter 4


Parsers. A parser takes as input tokens from the lexical analyzer and treats the token names as terminal symbols of a context-free grammar. The parser then constructs a parse tree for its input sequence of tokens; the parse tree may be constructed figuratively (by going through the corresponding derivation steps) or literally.

Context-Free Grammars. A grammar specifies a set of terminal symbols (inputs), another set of nonterminals (symbols representing syntactic constructs), and a set of productions, each of which gives a way in which strings represented by one nonterminal can be constructed from terminal symbols and strings represented by certain other nonterminals. A production consists of a head (the nonterminal to be replaced) and a body (the replacing string of grammar symbols).

Derivations. The process of starting with the start-nonterminal of a grammar and successively replacing it by the body of one of its productions is called a derivation. If the leftmost (or rightmost) nonterminal is always replaced, then the derivation is called leftmost (respectively, rightmost).
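As a concrete example, with the expression grammar E -> E + T | T, T -> T * F | F, F -> ( E ) | id, a leftmost derivation of id + id replaces the leftmost nonterminal at every step:

```
E => E + T      (E -> E + T)
  => T + T      (E -> T)
  => F + T      (T -> F)
  => id + T     (F -> id)
  => id + F     (T -> F)
  => id + id    (F -> id)
```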

Parse Trees. A parse tree is a picture of a derivation, in which there is a node for each nonterminal that appears in the derivation. The children of a node are the symbols by which that nonterminal is replaced in the derivation. There is a one-to-one correspondence between parse trees, leftmost derivations, and rightmost derivations of the same terminal string.

Ambiguity. A grammar for which some terminal string has two or more different parse trees, or equivalently two or more leftmost derivations or two or more rightmost derivations, is said to be ambiguous. In most cases of practical interest, it is possible to redesign an ambiguous grammar so it becomes an unambiguous grammar for the same language. However, ambiguous grammars with certain tricks applied sometimes lead to more efficient parsers.

Top-Down and Bottom-Up Parsing. Parsers are generally distinguished by whether they work top-down (start with the grammar’s start symbol and construct the parse tree from the top) or bottom-up (start with the terminal symbols that form the leaves of the parse tree and build the tree from the bottom). Top-down parsers include recursive-descent and LL parsers, while the most common forms of the bottom-up parsers are LR parsers.

Design of Grammars. Grammars suitable for top-down parsing often are harder to design than those used by bottom-up parsers. It is necessary to eliminate left-recursion, a situation where one nonterminal derives a string that begins with the same nonterminal. We must also left-factor – group productions for the same nonterminal that have a common prefix in the body.
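To make this concrete, the standard transformation rewrites an immediately left-recursive pair A -> Aα | β as:

```
A  -> β A'
A' -> α A' | ε
```

Applied to E -> E + T | T, this gives E -> T E' and E' -> + T E' | ε, which a top-down parser can handle.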

Recursive-Descent Parsers. These parsers use a procedure for each nonterminal. The procedure looks at its input and decides which production to apply for its nonterminal. Terminals in the body of the production are matched to the input at the appropriate time, while nonterminals in the body result in calls to their procedure. Backtracking, in the case when the wrong production was chosen, is a possibility.
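As a sketch of the idea, here is a minimal recursive-descent recognizer in Python for the left-recursion-free expression grammar; the class and function names are mine, not from the book. It recognizes input rather than building a parse tree, and needs no backtracking because the grammar is LL(1):

```python
# One procedure per nonterminal, for the grammar
#   E  -> T E'      E' -> + T E' | ε
#   T  -> F T'      T' -> * F T' | ε
#   F  -> ( E ) | id
# The ε-alternatives of E' and T' are handled as while-loops.

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def match(self, t):
        if self.peek() != t:
            raise SyntaxError("expected %r, got %r" % (t, self.peek()))
        self.pos += 1

    def E(self):
        self.T()
        while self.peek() == "+":   # E' -> + T E' | ε
            self.match("+")
            self.T()

    def T(self):
        self.F()
        while self.peek() == "*":   # T' -> * F T' | ε
            self.match("*")
            self.F()

    def F(self):
        if self.peek() == "(":
            self.match("(")
            self.E()
            self.match(")")
        else:
            self.match("id")

def accepts(tokens):
    """Return True iff the token list is a sentence of the grammar."""
    p = Parser(tokens)
    try:
        p.E()
        return p.pos == len(p.tokens)
    except SyntaxError:
        return False

print(accepts(["id", "+", "id", "*", "id"]))   # True
print(accepts(["id", "+", "*"]))               # False
```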

LL(1) Parsers. A grammar such that it is possible to choose the correct production with which to expand a given nonterminal, looking only at the next input symbol, is called LL(1). These grammars allow us to construct a predictive parsing table that gives, for each nonterminal and each lookahead symbol, the correct choice of production. Error correction can be facilitated by placing error routines in some or all of the table entries that have no legitimate production.

Shift-Reduce Parsing. Bottom-up parsers generally operate by choosing, on the basis of the next input symbol (lookahead symbol) and the contents of the stack, whether to shift the next input onto the stack, or to reduce some symbols at the top of the stack. A reduce step takes a production body at the top of the stack and replaces it by the head of the production.
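For example, with the grammar E -> E + E | id, a shift-reduce parse of id + id proceeds as follows ($ marks the bottom of the stack and the end of the input):

```
Stack        Input        Action
$            id + id $    shift
$ id         + id $       reduce by E -> id
$ E          + id $       shift
$ E +        id $         shift
$ E + id     $            reduce by E -> id
$ E + E      $            reduce by E -> E + E
$ E          $            accept
```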

Viable Prefixes. In shift-reduce parsing, the stack contents are always a viable prefix – that is, a prefix of some right-sentential form that ends no further right than the end of the handle of that right-sentential form. The handle is the substring that was introduced in the last step of the rightmost derivation of that sentential form.

Valid Items. An item is a production with a dot somewhere in the body. An item is valid for a viable prefix if the production of that item is used to generate the handle, and the viable prefix includes all those symbols of the item that are to the left of the dot, but not those to the right.

LR Parsers. Each of the several kinds of LR parsers operates by first constructing the sets of valid items (called LR states) for all possible viable prefixes, and keeping track of the state for each prefix on the stack. The set of valid items guides the shift-reduce parsing decision. We prefer to reduce if there is a valid item with the dot at the right end of the body, and we prefer to shift the lookahead symbol onto the stack if that symbol appears immediately to the right of the dot in some valid item.

Simple LR Parsers. In an SLR parser, we perform a reduction implied by a valid item with a dot at the right end, provided the lookahead symbol can follow the head of that production in some sentential form. The grammar is SLR, and this method can be applied, if there are no parsing-action conflicts; that is, for no set of items, and for no lookahead symbol, are there two productions to reduce by, nor is there the option to reduce or to shift.

Canonical-LR Parsers. This more complex form of LR parser uses items that are augmented by the set of lookahead symbols that can follow the use of the underlying production. Reductions are only chosen when there is a valid item with the dot at the right end, and the current lookahead symbol is one of those allowed for this item. A canonical-LR parser can avoid some of the parsing-action conflicts that are present in the SLR parsers, but often has many more states than the SLR parser for the same grammar.

Lookahead-LR Parsers. LALR parsers offer many of the advantages of SLR and Canonical-LR parsers, by combining the states that have the same kernels (sets of items, ignoring the associated lookahead sets). Thus, the number of states is the same as that of the SLR parser, but some parsing-action conflicts present in the SLR parser may be removed in the LALR parser. LALR parsers have become the method of choice in practice.

Bottom-Up Parsing of Ambiguous Grammars. In many important situations, such as parsing arithmetic expressions, we can use an ambiguous grammar, and exploit side information such as the precedence of operators to resolve conflicts between shifting and reducing, or between reduction by two different productions. Thus, LR parsing techniques extend to many ambiguous grammars.

Yacc. The parser-generator Yacc takes a (possibly) ambiguous grammar and conflict-resolution information and constructs the LALR states. It then produces a function that uses these states to perform a bottom-up parse and call an associated function each time a reduction is performed.