1 Introduction
This paper analyses two prevalent but opposing phenomena – palatal assimilation (e.g. si → ɕi) and palatal dissimilation (e.g. ji → i) – which frequently occur between adjacent positions and which are both typically analysed by referring (either explicitly or implicitly) to the syllable, e.g. the palatality of segment X must be realised in segment Y iff X and Y are linearly adjacent in the same syllable. Adjacency is formally defined as a precedence relation that is lexically encoded in the segments forming a CV sequence,1 while syllables are taken to be constituents formed by dependency relations between C (onset) and V (nucleus), where C is a dependent of V.
The relational properties between units – precedence and dependency – are both regularly employed in phonology to explain recurrent phenomena and aspects of phonological architecture. In the interests of representational minimalism, however, some recent theories of representation dispense with one of these two relational properties and describe phonological phenomena by referring only to the other property. There are two opposing views: (i) the strict CVCV model of Government Phonology (which may be dubbed Dependency-free Phonology) developed by Scheer (2004; 2008) and his colleagues, which abandons dependency and describes phonological phenomena by referring only to precedence; and (ii) Precedence-free Phonology developed by Nasukawa (2014; 2015a; b), which abandons precedence and describes phonological phenomena by referring only to dependency.2 Both approaches have their own merits, each making phonological descriptions theoretically more restrictive. Moreover, the existence of these two approaches highlights the importance of structural representations, which ultimately seem to be an essential component in all types of phonological theory, whether representation-based or computation-based. Following Nasukawa (2014; 2015b), this paper takes the view that precedence relations are not encoded in representations, but that all dependency relations are specified in morpheme-internal lexical structure. This view relies on the following two premises (Nasukawa 2015b: 213).
- (1) a. Morpheme-internal phonological structure consists not of segment-based precedence information but of a set of features which are hierarchically concatenated.
- b. Phonology is a module which not only interprets fully concatenated strings of morphemes but is also responsible for lexicalization (building the phonological structure of morphemes in the lexicon).
The premise in (1a) is conceived within a strictly monostratal model of phonology (Nasukawa 2011; 2012) in which dependency relations between units – which are employed in other modules of the grammar – are indispensable even in morpheme-internal structure, so that any information relating to the ordering of segments is representationally redundant. Instead, dependency relations between phonological categories are sufficient to account for phonological phenomena. This non-precedence-based structure implies the existence of embedded categories in morpheme-internal phonology. In accordance with the premise in (1b), then, phonology functions not only as an interpretive device (Translator’s Office; cf. Scheer 2008; 2011) but also as a computational module which concatenates phonological categories (or more precisely, ‘features’) to determine the phonological shape of morphemes. In other words, a syntax-like structure-building operation takes place in phonology during the course of lexicalization. In this precedence-free and feature-concatenation-based model of phonological representation (Nasukawa 2011; 2014), precedence is not a formal property in phonology; rather, it is viewed as nothing other than a by-product of phonetic interpretation relevant to the sensorimotor systems. On this basis, the division between phonology and its external systems may be said to parallel the division between syntax and performance systems.
In the context of a precedence-free phonological structure, this paper demonstrates how we can account for two types of palatality-related phenomena – palatal assimilation and palatal dissimilation in Japanese – which are conventionally analysed by referring to precedence relations. The structure of the present paper is as follows. §2 introduces the phonological primes which play a central role in phonological representation and discusses the nature of the morpheme-internal structure employed in Precedence-free Phonology. Then §3 describes the two opposing palatality-related effects – palatal assimilation and palatal dissimilation in Japanese – analysing both in terms of dependency relations between primes rather than by referring to precedence relations.
2 Precedence-free Phonology
2.1 Phonological primes
The approach described in Nasukawa (2014; 2015a; b) (cf. Nasukawa 2011; 2012) denies the existence of precedence relations between units of phonological representation, eliminating not only units such as CV units, skeletal positions and Root nodes (which have been assumed to carry properties relating to precedence) but also traditional prosodic units such as onsets, nuclei and codas (although these may still be informally referred to for the sake of ease of understanding). Instead, features are regarded as the units that play a central role in building phonological structure. This contrasts with orthodox phonological models, where features are merely inherent attributes of a segmental position and segments (more precisely, CV units or skeletal positions) are treated as the basic building blocks for constructing phonological structure. In this model features take the place of prosodic constituents like onset and nucleus, since features (which are, in phonological terms, the smallest units) themselves function as the basic building blocks of phonological structure. At the same time, a feature may also function as the head of a ‘nuclear’ expression, and by adding another feature to this head feature a complex expression is formed in which the additional feature takes the role of a dependent/complement. The feature model which most clearly illustrates this approach is the version of element-based feature theory developed by Nasukawa (2012; 2014), in which each feature (or element) is monovalent and fully interpretable on its own – to be phonetically realised it does not require the support of other elements. It follows that there is neither any universally fixed matrix of features nor any template-like feature organization. Subject to certain principles, features can combine freely with one another.
In element-based feature theory, melodic structure is represented using the six monovalent elements |A I U ʔ H L|. These are to be understood as mental objects which are active in all languages. Conceived of within the perception-based view of melodic structure employed in the work of Jakobson (Jakobson et al. 1952; Jakobson & Halle 1956), elements map directly onto their phonetic exponents. The six elements are described in Table 1, along with their typical acoustic signatures (Harris & Lindsey 2000; Harris 2005).
Element | Label | Spectral shape |
---|---|---|
|A| | ‘mass’ | mass of energy located in the center of the vowel spectrum, with troughs at top and bottom |
|I| | ‘dip’ | energy distributed to the top and bottom of the vowel spectrum, with a trough in between |
|U| | ‘rump’ | marked skewing of energy to the lower half of the vowel spectrum |
|ʔ| | ‘edge’ | abrupt and sustained drop in overall amplitude |
|H| | ‘noise’ | aperiodic energy |
|L| | ‘murmur’ | broad resonance peak at lower end of the frequency range |
In principle, the elements may be employed in both consonant and vowel expressions. Table 2 shows the different phonetic categories associated with each element according to whether it appears in a consonant or a vowel (Nasukawa & Backley 2008; Nasukawa 2014: 3).
Element | Label | Manifestation as a consonant | Manifestation as a vowel |
---|---|---|---|
|A| | ‘mass’ | uvular, coronal POA | non-high vowels |
|I| | ‘dip’ | palatal, dental POA | front vowels |
|U| | ‘rump’ | labial, velar POA | rounded vowels |
|ʔ| | ‘edge’ | oral or glottal occlusion | creaky voice (laryngealised Vs) |
|H| | ‘noise’ | aspiration, voicelessness | high tone |
|L| | ‘murmur’ | nasality, obstruent voicing | nasality, low tone |
The first three elements |A I U| form a natural group of ‘resonance’ elements which typically describe vowel quality, prosodic phenomena such as pitch and intonation patterns, and also place of articulation (POA) in consonants. The other three elements |ʔ H L| are associated with non-resonance properties such as occlusion, aperiodicity and laryngeal-source effects.
2.2 Representing vowels in Japanese
Nasukawa (2014) claims that what is traditionally assumed to be a nucleus is replaced by one of the three resonance elements |A|, |I| or |U|, this language-specific choice determining the phonetic quality of a melodically empty nucleus in the given language. (Traditionally, it is assumed that an empty nucleus is pronounced as one of the central vowels ə, i(ɨ) or ɯ, according to parametric choice.) English selects |A|, which is realised as ə in its acoustically weak form, while Yoruba chooses |I| (realised as i) and Japanese selects |U| (realised as ɯ in eastern Japan and as u in the west), as shown in Figure 1.
Thus languages divide into three types according to their baseline resonance: |A|-type (ə), |I|-type (i) and |U|-type (ɯ) (Nasukawa 2014: 13).
Given that the weak vocalic forms ə, i and ɯ are each represented by a single element |A|, |I| and |U| respectively, the question arises as to how the near-universal corner vowels a, i and u are represented structurally. In the case of |A|-type languages such as English, the baseline (which functions as a nucleus/V) takes another element as its dependent. If the baseline and |I| are concatenated, the whole expression is phonetically realised as i, and if the baseline and |U| form a set, the expression manifests itself as u. Furthermore, the set which consists of the baseline and |A| is phonetically interpreted as a. These structures may be represented as follows.
The leftmost structure in Figure 2 shows the representation of the English baseline, a sole |A|, which determines the quality of unstressed vowels and of the default epenthetic vowel, both of which are phonetically manifested as ə.3 On the other hand, the baseline resonance may also have the acoustic pattern of an additional (dependent) element superimposed on to it: for example, in the structures for a, i and u respectively, the dependents |A|, |I| and |U| have acoustic patterns with greater prominence than those of their baseline. These phonetic values a, i and u are the exaggerated forms of ə, i and ɯ respectively (where ə, i and ɯ are to be understood as the phonetic interpretation of |A|, |I| and |U| as bare heads) (Nasukawa 2014: 11–12). Following notational conventions, the head occupies the position at the top of the tree diagram and labels the entire structure. The corresponding relations between structural head-dependency and phonetic prominence are described and developed in Nasukawa & Backley (2015): heads are important and unmarked for structure-building but they lack phonetic prominence, whereas dependents are unimportant for structure-building but are phonetically more prominent. The same situation is found in other modules of the grammar, including syntax, where the default stress pattern in the verb phrase [kissed Mary] of [John [kissed Mary]VP] shows that the dependent (complement) of the phrase [Mary] is phonetically more prominent than the head [kissed]. For a detailed discussion, the reader is referred to Nasukawa & Backley (2015).
The same configuration also applies to |I|-type and |U|-type languages. One example is the |U|-type language Japanese, which will be discussed in the latter half of this paper. In the case of Japanese, |U| is the baseline (head), which is phonetically realised as the unrounded vowel ɯ when there is no dependent element.
When the baseline takes |A|, |I| or |U| as a dependent,4 the acoustic pattern (phonetic exponence) of this dependent element overrides that of the baseline. As a result, the structures are phonetically realised as a, i, and ɯ respectively,5 as shown in Figure 3. The remaining two vowels of Japanese, e and o, are represented as follows.
In the domain marked out with a dotted line, the set where |I| (solely phonetically interpreted as i) has |A| (interpreted as a) as a dependent is phonetically realised as the mid front vowel e: in acoustic terms, the additional (dependent) ‘mass’ pattern is added to the (structurally headed) ‘dip’ pattern. In this configuration, the dependent ‘mass’ pattern is more prominent than the head ‘dip’ pattern since |A| is the most embedded dependent, making it phonetically more prominent than the head. The same is true in the structure for o in Figure 4: in the |U|-headed set of |U| and |A|, the dependent |A| is acoustically more prominent than the head |U|.
Structures which are the reverse of those in Figure 4 are also employed in Japanese, as given in Figure 5: the |A|-headed set consisting of |A| and |I| is phonetically interpreted as the light diphthong ja (ĭa) rather than as a monophthong, while the |A|-headed set consisting of |A| and |U| phonetically manifests itself as the light diphthong ɰa (ɯ̆a).
The remaining light diphthongs permitted in Japanese are represented as follows.
Along the same lines, Figure 6 shows how the |U|-headed set consisting of |U| and |I| is phonetically interpreted as the light diphthong ju (ĭu), while the whole structure is realised as the light diphthong jo (ĭo) in which the |A|-headed set comprising |A| and |I| (phonetically interpreted as ja) is embedded in the dependent |A| part of the |U|-headed set consisting of |U| and |A| (phonetically interpreted as o).
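For concreteness, the vowel and light-diphthong structures introduced above may be sketched in code. The following Python fragment is only an illustrative encoding, not part of the formal proposal: each node is assumed to be a pair consisting of an element and a list of its dependents, with the head at the top of the tree; reading down the dependent spine gives progressively more prominent acoustic patterns.

```python
# Illustrative encodings of the Japanese structures in Figures 1-6 (assumed
# pair representation: (element, list_of_dependents), head at the top).

BASELINE = ("U", [])                      # bare |U| baseline -> ɯ
a  = ("U", [("A", [])])                   # baseline + dependent |A| -> a
i  = ("U", [("I", [])])                   # baseline + dependent |I| -> i
e  = ("U", [("I", [("A", [])])])          # |I|-headed set with dependent |A| -> e
o  = ("U", [("U", [("A", [])])])          # |U|-headed set with dependent |A| -> o
ja = ("U", [("A", [("I", [])])])          # reverse of e: |A|-headed set with |I| -> ja

def spine(node):
    """List elements from the head down to the most deeply embedded dependent;
    the deeper an element sits, the more prominent its acoustic pattern."""
    element, deps = node
    return [element] + (spine(deps[-1]) if deps else [])

print(spine(BASELINE))   # ['U']: the bare baseline, realised as ɯ
print(spine(e))          # ['U', 'I', 'A']: 'dip' pattern with a more prominent 'mass'
```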
As discussed in Nasukawa (2015a), the above structures find support in the observation that the jV of CjV (rather than Cj) behaves as a constituent in phonological phenomena, as demonstrated below (where the phonetic symbols in brackets are the phonetically realised forms).
- (2) a. Possible initial CjV: kjɑ, ɡjɑ, sjɑ [ɕɑ], zjɑ [ʑɑ], tjɑ [ʨɑ], hjɑ [çɑ], pjɑ, bjɑ, njɑ, mjɑ, rjɑ; kju, ɡju, sju [ɕu], zju [ʑu], tju [ʨu], hju [çu], pju, bju, nju, mju, rju; kjo, ɡjo, sjo [ɕo], zjo [ʑo], tjo [ʨo], hjo [ço], pjo, bjo, njo, mjo, rjo
- b. Impossible initial CjV: *kji, *ɡji, *sji, *zji, *tji, *hji, *pji, *bji, *nji, *mji, *rji; *kje, *ɡje, *sje, *zje, *tje, *hje, *pje, *bje, *nje, *mje, *rje
The pattern emerging from (2) is that a front vowel cannot follow a Cj sequence in Japanese. This is often taken to be a co-occurrence restriction which bans a sequence comprising the palatal glide j and a front (palatal) vowel (i/e). Yet in fact, not only CjV sequences but also jV sequences are subject to the same distributional restriction.
- (3) a. Possible initial jV: ja, ju, jo
- b. Impossible initial jV: *ji, *je
Given that the co-occurrence restriction works within a domain/constituent, as demonstrated by consonant clusters and diphthongs cross-linguistically, it follows that a CjV sequence must be syllabified as C-jV, where j is part of the nucleus rather than part of the onset (Cj-V). This is motivated by the fact that any consonant in the Japanese consonant inventory (except for j, w and the placeless nasal ɴ) may appear before a permitted jV sequence, i.e. it shows the same distributional freedom as a single consonant preceding any of the five monophthongs a, i, u, e, o. To capture this distributional restriction involving jV sequences, Nasukawa (2015a) claims that jV as a whole forms a nucleus rather than a CV sequence. That is, jV is a light diphthong (e.g. ĭa) of the kind which is also found in languages such as Korean and Chinese.
This view is also supported by the way these sounds are written in the Japanese syllabary, where kja (きゃ) is represented as a combination of き (ki) and a subscript ゃ (ja): i.e. ki is modified by the addition of ja.
2.3 Representing consonants in Japanese
Before proceeding to the analysis of Japanese palatalisation in §3, let us clarify how consonants are represented in the precedence-free model. It is assumed that consonants are structurally dependent on vowels, since vowels are generally taken to be obligatory in constituents such as ‘syllable’ and ‘word’ whereas consonants are optional. From this it follows that the vocalic part of the constituent forms its head (and is therefore unmarked and essential for structure-building), while the consonantal part takes the role of a dependent (and is therefore unimportant for structure-building).
On this basis – and in light of the above discussion on the relation between structural head-dependency and phonetic prominence – it may be claimed that consonants are more prominent than vowels since consonantal properties tend to function as phonetic cues to prosodic information (e.g. English aspiration as a marker of foot-initial position) while vowels have no comparable function (e.g. despite being more sonorous than consonants, vocalic properties are unmarked and do not show any acoustically-defined abrupt changes). This is consistent with the point made in §2.2 that heads lack phonetic prominence while dependents are phonetically more prominent.
Let us return to the argument that the part of a constituent which is phonetically more prominent and/or contrastively richer should occupy a more deeply embedded position. We may represent the consonantal part using the structure in Figure 7, where elements under a vertical line are heads and those under a slanting line are dependents.
As illustrated above, the consonantal part is dominated by the vocalic part: in the left-hand structure in Figure 7, the consonantal |H|-headed set of three elements (which phonetically manifests itself as t) is dependent on the baseline |U| that is the ultimate head of the expression. And the |U|-headed set of |H|″ and |U| (= |U|′) takes |A| as its dependent at the next level down. As discussed in §2.2, the part consisting of |U| and |A| is phonetically interpreted as a since the head |U| is a resonance baseline, the acoustic quality of which is masked by that of its dependent element. As a whole, the structure on the right-hand side is realised phonetically as ta ‘rice field’.
As mentioned earlier, and as discussed in Nasukawa (2011; 2014; 2015a; b) and Nasukawa & Backley (2008), representations of this kind make no reference to precedence relations between the units within phonological representations. There is therefore no difference between the two structures in Figure 7: both exhibit the same dependency relations between the units in their respective structures. In this model, as argued in Nasukawa (2011) who discusses in detail two types of dependency (endocentric dependency and exocentric dependency), linear precedence is to be regarded as the natural result of performance systems interpreting the hierarchical structure present in phonological representations.
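A comparable sketch, under the same assumed pair encoding, shows how a precedence-free representation of ta can be stated. The element content attributed to t here (|H ʔ A|, following Table 2 and Figure 8) is an interpretive assumption, and the two projection levels of the baseline in Figure 7 are flattened for simplicity.

```python
# Assumed encoding of ta (Figure 7): the obstruent set is simply a dependent
# of the |U| baseline; nothing in the representation encodes linear order.

t  = ("H", [("ʔ", []), ("A", [])])     # noise head, edge |ʔ|, coronal |A| (assumed)
ta = ("U", [t, ("A", [])])             # baseline |U| dominating t and the vowel a

def elements(node):
    """Collect the elements of a structure as an (unordered) set."""
    element, deps = node
    out = {element}
    for d in deps:
        out |= elements(d)
    return out

print(elements(ta))   # the set {'U', 'H', 'ʔ', 'A'}; the order t-before-a is
                      # left to phonetic interpretation by the performance systems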
Referring to the configurations in Figure 7, the element structures permitted to appear in the consonantal part6 are given below.
Since the noise element |H| is present in all obstruents, it serves to define the class of obstruents. (Conversely, the absence of |H| indicates a sonorant expression.) |H| is deemed the head of the consonantal expression in which it appears, while the nature of the hierarchical relation whereby |I|, |U| or |A| is dominated by |H| determines any acoustic effects relating to place of articulation. In addition, a whole expression is identified as a stop or an affricate if the edge element |ʔ| is present, as in Figure 8, while the same expression without |ʔ| is interpreted as a fricative, as shown in Figure 9.
Let us limit the present discussion to palatality (in representational terms, the property associated with the |I| element), since this will be the focus of §3. The element |I| is found in ʨ/ʥ (Figure 8), in ɕ (Figure 9) and in ç (also Figure 9); in all of these, |I| is in the most deeply embedded part of the structure, and for this reason, is interpreted as palatality.
Using the melodic structures just outlined, the next section describes two seemingly opposing phenomena involving palatality: palatal dissimilation and palatal assimilation.
3 Two opposing phenomena: palatal dissimilation and palatal assimilation
3.1 Palatal dissimilation
The process of palatal dissimilation in Japanese (see §2.2) imposes a ban on sequences of a palatal glide j followed by a front (palatal) vowel (i/e).
- (4) a. Possible jV: (C)jɑ, (C)ju, (C)jo
- b. Impossible jV: *(C)ji, *(C)je
The prohibited sequences *ji and *je are instead produced as i and e respectively (e.g. idiɕɕu < jɪdɪʃ ‘Yiddish’ and eritsiɴ < jeltsin ‘Yeltsin (Boris)’). This process is typically viewed as the effect of a co-occurrence restriction which disallows sequences of j plus i/e, making appeal to the OCP (Obligatory Contour Principle) or Identity Avoidance (Nasukawa 2015b; cf. Yip 1988; 1998). In terms of element structure, Japanese *ji and *je are represented as follows.
Recall that the structural part containing the vocalic set has one of the three elements |I|, |U| or |A| as its head, and that in the case of Japanese it is |U| which dominates and provides the baseline for the entire structure. In this vocalic part, only a single |I| element can appear (*|I I|: Nasukawa 2015b). Thus, by suppressing the more deeply embedded |I| element, as shown on the left in Figure 10, we arrive at a structure identical to the second structure from the right in Figure 3; this resulting structure is phonetically interpreted as i. The same applies to the right-hand structure in Figure 10: suppressing the most deeply embedded |I| leaves an expression which is interpreted as e, the same as in the left-hand structure in Figure 4.
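The suppression just described can be sketched as a small procedure, again under the assumed pair encoding used in the earlier sketches: whenever a second token of |I| is found below an existing |I| within the vocalic domain, the deeper token is removed (*|I I|). The structures assigned to ji and je below are illustrative readings of Figure 10.

```python
# A minimal sketch of the *|I I| repair (Identity Avoidance) in the vocalic domain.

def suppress_deeper_I(node, seen_I=False):
    """Remove any token of |I| that is dominated by another |I|."""
    element, deps = node
    if element == "I" and seen_I:
        return None                        # suppress the more deeply embedded |I|
    seen_I = seen_I or element == "I"
    kept = []
    for d in deps:
        repaired = suppress_deeper_I(d, seen_I)
        if repaired is not None:
            kept.append(repaired)
    return (element, kept)

ji = ("U", [("I", [("I", [])])])                 # two |I|s in one domain (Figure 10, left)
je = ("U", [("I", [("A", [("I", [])])])])        # two |I|s with |A| between (Figure 10, right)

print(suppress_deeper_I(ji))   # ('U', [('I', [])])            = the structure for i
print(suppress_deeper_I(je))   # ('U', [('I', [('A', [])])])   = the structure for e
```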
In loanword phonology, on the other hand, *je is occasionally accommodated as ie rather than e as in ieti (*eti) < jeti ‘Yeti’ and iereɴ (*ereɴ) < jelən ‘Yellen (Janet)’. The unpacking of je to ie may be analysed as follows.
Rather than suppressing the most deeply embedded element |I| as in the right-hand structure in Figure 10, the structure for ie is generated by breaking the input structure at the highest level and placing the most deeply embedded |I| (as in Figure 11, left-hand side) in the dependent position of |U|Dep, which is the first dependent of |U|Head.
In addition to the alternations *je > e and *je > ie, je (unlike *ji) is occasionally allowed in recent loanwords (e.g. jesu < jes ‘yes’ and jeroo < jeləʊ ‘yellow’). On the other hand, *ji is disallowed in all word types including loanwords (ijaa < jɪə ‘year’ and idiɕɕu < jɪdɪʃ ‘Yiddish’). Under the proposed representations in Figure 10, the difference between *ji and *je is attributed to the presence/absence of |A|: the structure which consists only of |I|s in the domain in question (the case of ji in Figure 10, left-hand side) is strictly prohibited by the requirement of Identity Avoidance *|I I|. On the other hand, the structure which contains |A| in addition to the two |I|s in the domain in question (the case of je in Figure 10, right-hand side, and Figure 11) may be interpreted differently depending on various factors such as donor language and word frequency: in some recent loanwords, the existence of |A| flanked hierarchically by the |I|s (as in Figure 10, right-hand side) protects the otherwise ill-formed *|I I| structure; while in others, the existence of |A| is transparent and renders the entire structure ungrammatical.
As we will see in the following section, |I| can appear in the non-vocalic domain too; specifically, |I| is allowed to occupy a position within the domain where the non-resonance element |H| is the head (i.e. the domain containing |H|, |ʔ| and |L|). It is possible for the same element to appear twice in an expression if the two tokens of that element reside in different (vocalic and consonantal) parts of the structure (the reader is referred to the discussion preceding and following Figure 13).
3.2 Palatal assimilation as SEARCH |H| and COPY |I|
The palatal dissimilation process just described for Japanese is observed in the vocalic set, while the opposite process of palatal assimilation involves palatality in both the vocalic and the dependent consonantal sets. The process itself targets only coronal obstruents (s, z, t, d) and the glottal fricative (h) when they precede the front high vowel i or the light diphthong jV. This is illustrated in (5). (Note that Japanese word-initial z is realised as [ʣ], which requires further explanation that is beyond the scope of this paper. In (5b) and (6b), therefore, the issue is avoided by only showing examples in which z appears word-internally.)
- (5) Palatalisation before i
- a. s: sɑkɑ ‘slope’, sikɑ ‘deer’, suikɑ ‘watermelon’, seki ‘cough’, soko ‘the bottom’ (si → [ɕi])
- b. z: ɑzɑ ‘birthmark, bruise’, kɑzi ‘fire’, kɑzu ‘number’, kɑze ‘wind’, nɑzo ‘riddle, puzzle’ (zi → [ʑi])
- c. t: tɑkɑsɑ ‘height’, tikɑrɑ ‘strength’, tuki ‘moon’, teki ‘enemy’, toki ‘time’ (ti → [ʨi], tu → [ʦɯ])
- d. h: hɑmɑ ‘beach’, hiru ‘daytime’, hutɑ ‘lid’, heso ‘navel’, hoɴ ‘book’ (hi → [çi], hu → [ɸɯ])
- (6) Palatalisation before jV
- a. s: sjɑkɑ ‘the Buddha’, sjuuki ‘period, cycle’, sjoki ‘secretary’ (sj → [ɕ])
- b. z: kuzjɑku ‘peafowl’, jɑzjuu ‘wild animal’, kuzjo ‘extermination’ (zj → [ʑ])
- c. t: tjɑ ‘tea’, tjuui ‘lieutenant’, tjooshi ‘condition’ (tj → [ʨ])
- d. h: hjɑku ‘a hundred’, hjuuɡɑ (name of a city), hjoo ‘table’ (hj → [ç])
All the target segments of palatalisation are obstruents, which suggests that the noise element |H| (for obstruency) is crucial to the process. Note that this is quite unlike the palatal dissimilation discussed above, which takes place in the vocalic domain where |H| is absent. Also, from the observation that palatal assimilation targets only coronal obstruents (s, z, t, d) and the glottal fricative (h) when they precede i or jV, we can expect there to be something which is common to the internal structures of the target segments.
As seen in the three rightmost structures in Figure 12, the target segments (t, s/z, h) all have the noise element |H|, and in addition, all lack the rump element |U|, whereas non-target segments such as p/b and k/ɡ do contain |U|. In other words, palatalisation affects a consonantal set which has |H| but no |U| and which is dominated by a vocalic set containing |I| (the source of palatality). Consider t-palatalisation as an example.
As illustrated above, the process which palatalises a consonantal structure may be analysed as palatality-spreading/copying, on condition that this structure is recognised as an obstruent. In element terms, it can be said that the existence of |H| (which defines obstruency) forces the most deeply embedded |I| in the vocalic set to COPY itself to the most deeply embedded part of the consonantal set. This may be formally expressed as in (7).
- (7) SEARCH |H| and COPY |I|
  SEARCH |H| and COPY the V-dependent |I| into the most deeply embedded part of the |H| domain.
In the case of the sequence ti, the dependent |I| in the vocalic set (i.e. the only – and therefore, the most deeply embedded – token of |I|) copies itself onto the most deeply embedded part of the consonantal set containing |H|.
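For illustration, the operation in (7) can be sketched procedurally under the same assumed pair encoding used above. Two conditions built into the sketch – that the source |I| must itself be the most deeply embedded element of the vocalic set, and that a consonantal |U| blocks copying – anticipate the discussion of (8b), Figure 16 and (9) below; the segment encodings are assumptions rather than the author's own representations.

```python
# A procedural sketch of SEARCH |H| and COPY |I| (7), assumed encodings.

def contains(node, target):
    element, deps = node
    return element == target or any(contains(d, target) for d in deps)

def terminal(node):
    """The most deeply embedded element along the dependent spine."""
    element, deps = node
    return element if not deps else terminal(deps[-1])

def add_at_deepest(node, new):
    """Attach `new` as a dependent in the most deeply embedded position."""
    element, deps = node
    if not deps:
        return (element, [(new, [])])
    return (element, deps[:-1] + [add_at_deepest(deps[-1], new)])

def search_H_copy_I(consonantal, vocalic):
    """Palatalise an |H|-headed (obstruent) set before i/jV."""
    if not contains(consonantal, "H"):   # SEARCH fails: target is not an obstruent
        return consonantal
    if terminal(vocalic) != "I":         # no |I| in the most embedded V position
        return consonantal
    if contains(consonantal, "U"):       # blocked by the restriction *|I U| in (9)
        return consonantal
    return add_at_deepest(consonantal, "I")

t = ("H", [("ʔ", []), ("A", [])])        # t = |H ʔ A| (assumed encoding)
i = ("U", [("I", [])])                   # vocalic i on the |U| baseline
print(search_H_copy_I(t, i))             # |I| added at the deepest position: cf. ʨ
```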
At this point let us address the following questions arising from this analysis.
- (8) a. Why is the presence of |H| required for |I|-duplication?
- b. Why must the source |I| and the duplicated |I| both occupy the most deeply embedded part of their respective domains?
Regarding (8a), Element Theory shows a clear connection between |H| (obstruency) and |I| (palatality) in that both are members of the group of ‘light’ elements. As discussed in detail in Backley & Nasukawa (2009) and Backley (2011), the ‘light’ elements comprise the set |I H ʔ| while the remaining elements |U A L| are ‘dark’. Here it is claimed that palatalisation is driven by a mechanism in which the light element |I| seeks out another light element |H|, the former being copied onto a position where the latter is already present (Table 3).
 | | ‘light’ | ‘dark’ |
---|---|---|---|
Non-resonance | | ʔ | |
 | Source | H | L |
Resonance | Colour | I | U |
 | | | A |
The reason why |I| and |H| behave as a set in the process in question is that they appear freely in both consonantal and vocalic domains, whereas |ʔ| is typically limited to consonantal domains. Because the process in question (palatalisation) is an interaction between consonantal and vocalic domains, only elements which can function in this way naturally form a group. The same holds for the ‘dark’ group in some systems, such as native (Yamato) Japanese, where |U| (labiality) and |L| (nasality/voicing) behave as a set: |U| can be employed in a single consonantal segment when it is accompanied by |L|, as in m (|U L ʔ|) and b (|U L ʔ H|), while |U| with no |L| can only appear in a geminate consonant, as in -pp- (|U ʔ H|).
As for (8b), the property that is copied occupies the most deeply embedded position (terminal dependent) in the whole structure, being subject to three levels of embedding. As such, it is able to maximize the effects of |I| percolating through the entire domain to ensure the most effective agreement of the active property.
The operations SEARCH |H| and COPY |I| in (7) also apply to fricatives, since these too contain |H| in their structures; the same palatalisation process is observed, as illustrated below.
Unlike the stop t in Figure 13, the fricative s in Figure 14 has no |ʔ|; yet the V-dependent |I| is still copied to the most deeply embedded part of the |H| domain. Additionally, the glottal fricative h, consisting of a sole |H| element, is also a target for |I|-copying to its dependent position when the SEARCH and COPY operations apply, as shown in Figure 15.
By contrast, the |H|-headed set that contains |U| is immune to palatalisation, as illustrated in Figure 16.
Even though the conditions for SEARCH and COPY are met, the presence of the rump element |U| prevents |I| from being copied to the |H|-headed domain. This is attributed to the following co-occurrence restriction which is operative in Japanese.
- (9) *|I U|
  |I| and |U| cannot appear in the same domain.
Notably, (9) applies not only to the consonantal (|H|-headed) domain but also to the vocalic (|U|-headed) domain. As discussed in §2.2, Japanese disallows the combination of |I| and |U| in a vocalic domain (note that the baseline element |U|, which is the ultimate head of the domain, does not count as a dependent |U|). And as shown below, what applies to p also applies in the case of k, since k also contains |U|.
The co-occurrence restriction in (9) prevents |I| from being copied to the most deeply embedded part of the |H|-headed domain of the velar stop k. (Note that k is phonetically palatalised before i, but the degree of palatalisation is perceptibly different from that of t, s, z and h.)
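The blocking effect of (9) can be checked with a small self-contained sketch (assumed encodings as before, based on Table 2; how p and k are distinguished from one another, e.g. by the headedness of |U|, is abstracted away from here).

```python
# A self-contained check of the *|I U| restriction in (9), assumed encodings.

def contains(node, target):
    element, deps = node
    return element == target or any(contains(d, target) for d in deps)

def copy_blocked(consonantal):
    """COPY |I| is blocked whenever |U| is present in the |H|-headed domain."""
    return contains(consonantal, "U")

t = ("H", [("ʔ", []), ("A", [])])     # coronal: no |U|, so palatalisation applies
p = ("H", [("ʔ", []), ("U", [])])     # labial: |U| present, COPY is blocked
k = ("H", [("ʔ", []), ("U", [])])     # velar: |U| present, COPY is blocked

print([copy_blocked(x) for x in (t, p, k)])   # [False, True, True]
```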
Thus in Standard Japanese, the argument of COPY is |I| and the co-occurrence restriction is *|I U|. Arguments of COPY are parametrically selected: |U| for rounding assimilation (e.g. round harmony in Turkish and Finnish), |A| for height assimilation (e.g. height harmony in Chichewa and Basque), |L| for nasal/voicing assimilation (e.g. postnasal voicing in Zoque and Japanese), |H| for voicelessness assimilation (e.g. in English and Swedish) and |ʔ| for stop gemination (e.g. in Italian and Danish) (Harris 1994; Harris & Lindsey 1995; Nasukawa 2005; Backley 2011). As for co-occurrence restrictions, not only *|I U| (both ‘colour’ elements) but also *|H L| (both ‘source’ elements) are observed cross-linguistically (Harris 1994; Nasukawa & Backley 2005; Backley 2011). In principle, any combination of elements has the potential to act as a co-occurrence restriction, although in practice there are clear tendencies: *|ʔ H| is marked, although it does function when no other elements are present.
Returning to the copying of |I| (palatalisation), unlike Figure 17, some dialects of Japanese exhibit the palatalisation of velar stops: e.g. cɨŋko ‘safe’ in the Shiroishi dialect (kiŋko in Standard Japanese) and ɨɟɨ ‘railway station’ in the Morioka dialect (eki in Standard Japanese). This is illustrated as follows.
In this dialect, unlike in Figure 17, COPY(|I|) requires |I| in the vocalic domain to be copied to the highest dependent position (rather than the most deeply embedded part), which forces |U| out of its position. As a result, as shown on the right in Figure 18, the structure of the consonantal domain is phonetically interpreted as c.
The same process can be found in the case of ci < ti (e.g. cɨkara ‘power, force’ in the Morioka dialect (ʨikara in Standard Japanese)), where (according to the requirement of *|I U|) |A| in the highest dependent position of the consonantal domain is forced out and |I| from the vocalic domain is instead copied into that position (Figure 19).
As in Standard Japanese, however, no bilabial stop palatalisation is observed in this dialect, since the two |U|s in the consonantal domain block COPY(|I|), as illustrated in Figure 20.
Another question to be addressed is why the vowel e, which also contains |I|, does not trigger palatalisation (Figure 21).
As illustrated in Figure 4 in §2.2, e has |I| as a head and |A| as a dependent, so the most deeply embedded element is |A| rather than |I|. Since the operation COPY in (7) targets the most deeply embedded |I| in the V domain, the head |I| in e cannot be a source for the copying operation. As such, e fails to palatalize a preceding coronal obstruent or glottal fricative. However, in some dialects of Japanese (e.g. in Kyushu) the sequence se manifests itself as ɕe, suggesting that in those dialects it does not matter if the source |I| is in the most deeply embedded part of the structure or not: parametrically any |I| is copied to the consonantal set if it is present in the dominant vocalic set.
A final point to note is that the string ɕe is permitted in loanwords, in which case ɕ is not the result of palatalisation triggered by the following e: rather, it is simply a sequence consisting of ɕ plus any vowel (a, i, u, e, o), which is possible in Japanese loanwords.
4 Conclusion
This paper has analysed two processes involving palatality: (i) palatal dissimilation and (ii) palatal assimilation. While the latter has traditionally been accounted for by referring to precedence relations between segments, in this paper it has been reanalyzed within the context of Precedence-free Phonology, which makes no reference to precedence relations in representations, and instead, employs only head-dependency relations between units (Nasukawa 2011; 2014; 2015a; b).
In this model, the traditional notions of progressive and regressive assimilation are interpreted in terms of different COPY movements: progressive (C to V) involves copying to a higher position in the hierarchical structure, while regressive (V to C) requires the opposite movement to a lower position. The latter is typologically more common (Bhat 1978; Bateman 2007), suggesting that it is more natural for movement to target a position in the same domain as the source.
Palatal dissimilation (de-palatalisation) takes place between sonorants: the process affects sequences of j followed by i or e. As a result of de-palatalisation the banned sequences *ji, *Cji, *je, *Cje are produced as i, Ci, e and Ce respectively. Palatal assimilation, on the other hand, targets coronal obstruents (s, z, t, d) and the glottal fricative (h) when they precede the front high vowel i.
On this basis, the only difference between the two processes concerns the presence/absence of obstruency. In terms of element-based representations, segments with |H| (noise = obstruency) undergo palatalisation (COPY |I| (|I|-agreement)) while the sonorant j, which has no |H|, is subject to de-palatalisation (*|I I|). In accordance with the general requirement of Identity Avoidance, the same element |I| cannot appear twice in a domain; so in the case of de-palatalisation two tokens of |I| in the sonorant j are disallowed and one of them (the dependent |I|) must be suppressed. However, another |I| is allowed to appear in the |H|-headed domain, since an element may be freely copied to a position outside of its own consonantal/vocalic domain. So under the operations SEARCH |H| and COPY |I|, |I| is specified in the |H|-headed domain if it is already present in the dependent part of the associated vocalic set; then palatalisation is established. This is consistent with the analyses of nasal and vowel harmony in Nasukawa (2005), where a similar mechanism referring to dependency relations in prosodic structure is discussed.
This analysis succeeds in accounting for phenomena in which no palatalisation takes place, i.e. when the segments concerned are labials or velars. Employing the co-occurrence restriction *|I U| which in the case of Japanese functions in the vocalic set (i.e. *|I U| bans segments such as y and ø, which contain both |I| and |U|), I have claimed that the restriction also applies to the consonantal (|H|-headed) domain. In this way, the constraint may be said to apply across the board within a given language.
At no point have the above analyses made any reference to precedence relations between structural units. Further research will now be required on other phenomena that have traditionally been analysed by referring to precedence relations.
Notes
- 1. Alternatively, a precedence relation could be formed between neighbouring X slots, or root nodes, or between features within a contour segment such as an affricate or prenasalised obstruent.
- 2. The notion of precedence has been questioned in the literature, e.g. by Anderson (1987; Dependency Phonology), van der Hulst (2010), Fujimura (1996; the Converter/Distributor model) and Haraguchi (2003; the Set Theory of the Syllable). Like the present model, these all exclude the notion of precedence from representations, but unlike the present proposal they formally retain prosodic categories (e.g. onset, nucleus, syllable, foot).
- 3. As discussed in Nasukawa (2014), all languages have one of the baseline resonance qualities (|I|, |U| or |A|), which appears as a default epenthetic vowel. The identity of this default vowel is typically revealed through loanword phonology.
- 4. Multiple appearances of the same element in a segment-sized structure are characteristic of Particle Phonology (Schane 1984; 1995; 2005). The idea of head-dependency relations between elements can be traced back to Dependency Phonology (Anderson & Ewen 1987).
- 5. Note that there is no phonetic difference between the manifestation of the sole baseline (ɯ) and the realisation (ɯ) of the set consisting of the baseline plus a dependent |U|. Phonologically, however, they display different behaviour: unlike the latter, the former is restricted to verb endings and is insensitive to phonological processes. The reader is referred to the detailed discussion in Nasukawa (2011).
- 6. The consonants of Japanese are as follows (distinction between phonemic and allophonic not shown).
Acknowledgements
An earlier version of this work was presented at the Palatalization Conference (University of Tromsø/CASTL, Norway, 5 December 2014). I thank the participants at the conference for their constructive comments. I would also like to thank Phillip Backley, Martin Krämer and three anonymous reviewers for their insightful comments; one of the reviewers kindly provided me with additional data on dialectal and loanword forms. This research was supported by the Japanese government (Grant-in-Aid for Scientific Research (B), grant 26284067).
Competing Interests
The author declares that he has no competing interests.
References
Anderson, John M. (1987). The Limit of Linearity. In: Anderson, John M. & Durand, Jacques (eds.), Explorations in dependency phonology. Dordrecht: Foris, pp. 199.
Anderson, John M. & Ewen, Colin J. (1987). Principles of dependency phonology. Cambridge: Cambridge University Press. DOI: http://dx.doi.org/10.1017/CBO9780511753442
Backley, Phillip (2011). An introduction to Element Theory. Edinburgh: Edinburgh University Press.
Backley, Phillip & Nasukawa, Kuniya (2009). Representing labials and velars: A single ‘dark’ element. Phonological Studies 12: 3.
Bateman, Nicoleta (2007). A cross-linguistic investigation of palatalization. Doctoral dissertation, University of California, San Diego.
Bhat, D. N. Shankara (1978). A general study of palatalization. In: Greenberg, Joseph H. (ed.), Universals of human language. Palo Alto, CA: Stanford University Press, pp. 47.
Fujimura, Osamu (1996). The C/D model as a dynamic, non-segmental approach (ATR Technical Report TR-H-184). Tokyo: ATR Human Information Processing Research Laboratories.
Haraguchi, Shosuke (2003). The phonology-phonetics interface and syllabic theory. In: van de Weijer, Jeroen, van Heuven, Vincent J. & van der Hulst, Harry (eds.), The phonological spectrum. Amsterdam & Philadelphia: John Benjamins, pp. 31. DOI: http://dx.doi.org/10.1075/cilt.234.05har
Harris, John (2005). Vowel reduction as information loss. In: Carr, Philip, Durand, Jacques & Ewen, Colin J. (eds.), Headhood, elements, specification and contrastivity: Phonological papers in honour of John Anderson. Amsterdam & Philadelphia: John Benjamins, pp. 119. DOI: http://dx.doi.org/10.1075/cilt.259.10har
Harris, John & Lindsey, Geoff (2000). Vowel patterns in mind and sound. In: Burton-Roberts, Noel, Carr, Philip & Docherty, Gerry (eds.), Phonological knowledge: Conceptual and empirical issues. Oxford: Oxford University Press, pp. 185.
Jakobson, Roman, Fant, C. Gunnar M. & Halle, Morris (1952). Preliminaries to speech analysis. Cambridge, MA: The MIT Press.
Jakobson, Roman & Halle, Morris (1956). Fundamentals of language. The Hague: Mouton.
Nasukawa, Kuniya (2005). A unified approach to nasality and voicing. Berlin & New York: Mouton de Gruyter. DOI: http://dx.doi.org/10.1515/9783110910490
Nasukawa, Kuniya (2011). Representing phonology without precedence relations. English Linguistics 28: 278. DOI: http://dx.doi.org/10.9793/elsj.28.2_278
Nasukawa, Kuniya (2012). Recursion in intra-morphemic phonology. Paper presented at the workshop ‘Language and the Brain’, the 9th International Conference on the Evolution of Language (Evolang IX), Kyoto, 13 March 2012.
Nasukawa, Kuniya (2014). Features and recursive structure. Nordlyd 41(1): 1. (Special issue on features, edited by Martin Krämer, Sandra-Iulia Ronai & Peter Svenonius.)
Nasukawa, Kuniya (2015a). Why the palatal glide is not a consonantal segment in Japanese: An analysis in a dependency-based model of phonological primes. In: Raimy, Eric & Cairns, Charles E. (eds.), The segment in phonetics and phonology. Malden, MA: Wiley-Blackwell, pp. 180.
Nasukawa, Kuniya (2015b). Recursion in the lexical structure of morphemes. In: van Oostendorp, Marc & van Riemsdijk, Henk C. (eds.), Representing structure in phonology and syntax. Berlin & Boston: Mouton de Gruyter, pp. 211. DOI: http://dx.doi.org/10.1515/9781501502224-009
Nasukawa, Kuniya & Backley, Phillip (2005). Dependency relations in Element Theory: Markedness and complexity. In: Kula, Nancy C. & van de Weijer, Jeroen (eds.), Proceedings of the Government Phonology Workshop. Special issue of Leiden Papers in Linguistics 2(4): 77.
Nasukawa, Kuniya & Backley, Phillip (2008). Affrication as a performance device. Phonological Studies 11: 35.
Nasukawa, Kuniya & Backley, Phillip (2015). Heads and complements in phonology: A case of role reversal? Phonological Studies 18: 67.
Schane, Sanford A. (1984). The fundamentals of particle phonology. Phonology Yearbook 1: 129. DOI: http://dx.doi.org/10.1017/S0952675700000324
Schane, Sanford A. (1995). Diphthongization in particle phonology. In: Goldsmith, John A. (ed.), The handbook of phonological theory. Oxford: Blackwell, pp. 586.
Schane, Sanford A. (2005). The aperture particle |a|: Its role and functions. In: Carr, Philip, Durand, Jacques & Ewen, Colin J. (eds.), Headhood, elements, specification and contrastivity: Phonological papers in honour of John Anderson. Amsterdam & Philadelphia: John Benjamins, pp. 313.
Scheer, Tobias (2004). A lateral theory of phonology: What is CVCV and why should it be? Berlin & New York: Mouton de Gruyter. DOI: http://dx.doi.org/10.1515/9783110908336
Scheer, Tobias (2008). Why the prosodic hierarchy is a diacritic and why the interface must be direct. In: Hartmann, Jutta, Hegedüs, Veronika & van Riemsdijk, Henk C. (eds.), Sounds of silence: Empty elements in syntax and phonology. Amsterdam: Elsevier, pp. 145.
van der Hulst, Harry (2010). A note on recursion in phonology. In: van der Hulst, Harry (ed.), Recursion and human language. Berlin & New York: Mouton de Gruyter, pp. 301.
Yip, Moira (1988). The Obligatory Contour Principle and phonological rules: A loss of identity. Linguistic Inquiry 19: 65.
Yip, Moira (1998). Identity avoidance in phonology and morphology. In: Lapointe, Steven G., Brentari, Diane K. & Farrell, Patrick M. (eds.), Morphology and its relation to phonology and syntax. Stanford, CA: CSLI Publications, pp. 216.