ry-lfc-complexity

I read //Shallow draughts: Larsen-Freeman and Cameron on Complexity// and had an interesting reaction to it, one not unrelated to what our ST text discusses as a characteristic of an L2 speaker's acceptance into an L1-speaking group. My perception is that not only is the LFC book on Complexity not worth reading, but the paper reviewing it might itself be written off and disregarded by mathematicians familiar with chaos theory. Not that I know that much about chaos theory, but this was also my own reaction to the review. It seems to me that while Kevin Gregg captures more of the essence of chaotic systems - what he calls 'complex systems', perhaps because that is what the book he reviews calls them - he is still not convincing to me in his understanding of chaotic systems and their possible connection with language development and language evolution. On the other hand, if he is persuasive to experts in SLA or linguistics, then at least he does them a service by discrediting a book that is weak in its arguments.

Because I don't have a deep enough background in language acquisition and linguistics research, and because I don't have the time to invest in a more thorough response, I offer only the skeleton of an argument for comparing deterministic chaotic systems with language utterances and language evolution. What strikes me about a Chomskyan interpretation of language competence and L1 or near-L1 L2 speaking is that how we speak is considered to be generated by a few basic rules or principles driven by our internal universal grammar. Ha. I should not make it sound so impersonal. This //is// how I think about L1 and fluent L2 speakers, where fluency is as defined in the ST text and involves phonetic and grammatical mastery.

Deterministic chaotic systems are similar to language production in that they too are governed by a simple and small set of rules. Some fractal-generating systems, such as the Sierpinski triangle "game", include explicit randomness. Others, such as Lindenmayer systems, are completely deterministic.

The Sierpinski triangle attractor via a random seed point and random choices:
 * 1) Take three noncollinear points A, B, and C in the ordinary Euclidean plane and start with a randomly placed generating seed point p0.
 * 2) Pick any one of points A, B, or C at random and move halfway from the current generating point toward the chosen point. The new location becomes the next generating point.
 * 3) Repeat step two forever and a Sierpinski triangle gasket (the attractor) gets outlined by the points p0, p1, p2, ... pn, ...
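The three steps above can be sketched in a few lines of Python. The vertex coordinates, iteration count, and random seed here are illustrative choices of mine, not anything from the text:

```python
import random

def chaos_game(n_points=10000, seed=42):
    """Play the Sierpinski chaos game and return the generated points."""
    random.seed(seed)
    # Three noncollinear vertices A, B, C of an equilateral triangle.
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
    p = (random.random(), random.random())  # randomly placed seed point p0
    points = []
    for _ in range(n_points):
        a = random.choice(vertices)                  # pick A, B, or C at random
        p = ((p[0] + a[0]) / 2, (p[1] + a[1]) / 2)   # move halfway toward it
        points.append(p)                             # record p1, p2, ...
    return points

pts = chaos_game()
```

Scatter-plotting `pts` (e.g., with matplotlib) makes the triangle gasket appear after a few thousand iterations; the randomness only decides which part of the attractor gets visited next.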

There is a more abstract way to generalize this approach to other fractal attractors in the plane. It is based on iterated function systems, or IFSs, where the functions are affine transformations and the successive outputs p(n+1) = f(p(n)) are plotted in the plane to outline the given function system's attractor. One such attractor that can be generated randomly with an IFS is the snowflake curve. Yet the snowflake curve, like the Sierpinski triangle, can also be generated by a completely deterministic system.
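The chaos game is in fact the simplest IFS: three affine contractions, each mapping the plane halfway toward one triangle vertex. A minimal sketch, with coordinates assumed by me:

```python
import random

# Three affine maps f_i(p) = 0.5 * p + 0.5 * v_i for vertices
# A = (0, 0), B = (1, 0), C = (0.5, 0.866).
MAPS = [
    lambda x, y: (0.5 * x,        0.5 * y),          # contract toward A
    lambda x, y: (0.5 * x + 0.5,  0.5 * y),          # contract toward B
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.433),  # contract toward C
]

def iterate_ifs(n=5000, seed=1):
    """Iterate randomly chosen maps: p(n+1) = f(p(n))."""
    random.seed(seed)
    x, y = random.random(), random.random()
    orbit = []
    for _ in range(n):
        f = random.choice(MAPS)  # the random half of the "random IFS"
        x, y = f(x, y)
        orbit.append((x, y))
    return orbit

orbit = iterate_ifs()
```

Swapping in a different set of affine maps (with different scalings, rotations, and translations) traces out a different attractor; a well-known four-map set produces the Barnsley fern.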

The snowflake attractor via a deterministic Lindenmayer system, aka L-system:
 * 1) Start with the same triangle as above, but this time focus on the segments AB, BC, and CA.
 * 2) Apply the substitution grammar F --> F-F++F-F to every segment. Yes, this is our first clue - it is called a substitution //grammar//. The rule means: at each generation, replace each F with F-F++F-F. Read as drawing instructions, F means draw a line segment, - means turn left 60 degrees, and ++ means turn right 60 + 60 = 120 degrees.
 * 3) Repeat step two forever and the snowflake attractor appears. The image below from Wikipedia illustrates the first four generations on an equilateral triangle. You can see an [|animation that illustrates its fractal self-similarity].
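The symbolic rewriting at the heart of this L-system can be sketched directly (rendering the turtle-graphics interpretation is left out here; the axiom strings are the standard ones for one side and for the full triangle):

```python
# Substitution grammar for the snowflake curve:
# each F is replaced by F-F++F-F at every generation.
# (- : turn left 60 degrees, ++ : turn right 120 degrees when drawn.)
RULE = {"F": "F-F++F-F"}

def rewrite(axiom, generations):
    """Apply the substitution grammar repeatedly to the axiom string."""
    s = axiom
    for _ in range(generations):
        s = "".join(RULE.get(ch, ch) for ch in s)  # non-F symbols pass through
    return s

gen1 = rewrite("F", 1)        # one side of the triangle, one generation
gen2 = rewrite("F", 2)        # each generation quadruples the F count
snowflake = rewrite("F++F++F", 2)  # axiom for the whole triangle
```

The grammar is completely deterministic, yet the string (and the curve it draws) grows intricately self-similar, which is exactly the point of the comparison below.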

I hesitate to point out exact parallels between a "natural" human language and an L-system grammar, but I think a comparison is reasonable; otherwise, mathematicians would not have named the substitution rules a grammar. Given the very same wariness that mathematics practitioners feel when a linguist or language acquisition researcher injects chaotic systems into applied linguistics, I imagine human language experts would also have their doubts about mathematicians using the term "grammar" to describe a mathematical system. With that said, however: self-similar and complex-looking fractal patterns clearly do occur in nature (e.g., cauliflower, clouds, ferns); these can be described not only with random seeds but also with deterministic L-system grammars; and our brains are, after all, parts of nature as well. So it is not far-fetched to consider that systems that look complex are often generated by a small finite set of rules.

But wait! This is exactly the premise of the Chomskyan understanding of universal grammar. As wonderfully complex as our mental capacities are, our brains are still finite and have limited capability, so it is quite reasonable to think that brains should operate on a simple and relatively small set of rules to generate and understand language utterances. "Sharks sharks eat eat" has its limits as an example, but it highlights our limited real-time processing capacity (as do complex sentence diagrams). Why shouldn't we consider language as a deterministic attractor generated by our universal grammars? We all start with different seed languages, perhaps, but based on different parameters, our UG can develop into different languages. To me, a powerful takeaway is how strongly the vocabulary and metaphors we choose to describe language competence and second language acquisition shape how and what we choose to research in SLA.

Your turn. Do you think some kind of L-system-style, grammar-driven approach to SLA might be helpful? Obviously, I do. First, check out the Wikipedia article on (simple) [|L-systems and their grammars] because they're also visual and pretty. Could these provide some direction for future SLA research? I suspect so. After reviewing the article on L-systems, the Wikipedia article on [|Formal language] looks like a promising read. As usual, there are others who have thought about, and are still thinking about, these things already!