Database: MEDLINE
Search: F02.463.593.524 [DeCS category]
References found: 1111
Showing: 1 .. 10 in [Detailed] format

Page 1 of 112

  1 / 1111 MEDLINE  
[PMID]:29050387
[Au] Author:Bonilha L; Hillis AE; Hickok G; den Ouden DB; Rorden C; Fridriksson J
[Ad] Address:Department of Neurology, Medical University of South Carolina, Charleston, SC, USA.
[Ti] Title:Temporal lobe networks supporting the comprehension of spoken words.
[So] Source:Brain;140(9):2370-2380, 2017 Sep 01.
[Is] ISSN:1460-2156
[Cp] Country of publication:England
[La] Language:eng
[Ab] Abstract:Auditory word comprehension is a cognitive process that involves the transformation of auditory signals into abstract concepts. Traditional lesion-based studies of stroke survivors with aphasia have suggested that neocortical regions adjacent to auditory cortex are primarily responsible for word comprehension. However, recent primary progressive aphasia and normal neurophysiological studies have challenged this concept, suggesting that the left temporal pole is crucial for word comprehension. Due to its vasculature, the temporal pole is not commonly completely lesioned in stroke survivors, and this heterogeneity may have prevented its identification in lesion-based studies of auditory comprehension. We aimed to resolve this controversy using a combined voxel-based and structural-connectome lesion-symptom mapping approach, since cortical dysfunction after stroke can arise from cortical damage or from white matter disconnection. Magnetic resonance imaging (T1-weighted and diffusion tensor imaging-based structural connectome), auditory word comprehension and object recognition tests were obtained from 67 chronic left hemisphere stroke survivors. We observed that damage to the inferior temporal gyrus, to the fusiform gyrus and to a white matter network including the left posterior temporal region and its connections to the middle temporal gyrus, inferior temporal gyrus, and cingulate cortex, was associated with word comprehension difficulties after factoring out object recognition. These results suggest that the posterior lateral and inferior temporal regions are crucial for word comprehension, serving as a hub to integrate auditory and conceptual processing. Early processing linking auditory words to concepts is situated in posterior lateral temporal regions, whereas additional and deeper levels of semantic processing likely require more anterior temporal regions.
[Mh] Primary MeSH terms: Comprehension
Connectome
Speech Perception/physiology
Speech
Stroke/pathology
Temporal Lobe/pathology
[Mh] Secondary MeSH terms: Acoustic Stimulation
Brain Mapping
Diffusion Tensor Imaging
Female
Humans
Magnetic Resonance Imaging
Male
Middle Aged
Neuropsychological Tests
Pattern Recognition, Physiological
White Matter/pathology
[Pt] Publication type:JOURNAL ARTICLE
[Em] Entry month:1710
[Cu] Class update:171030
[Lr] Last revision date:171030
[Sb] Journal subset:AIM; IM
[Da] Processing entry date:171021
[St] Status:MEDLINE
[do] DOI:10.1093/brain/awx169
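The lesion-symptom mapping approach this record describes can be illustrated with a toy mass-univariate sketch (this is not the authors' actual pipeline; the function name and the simulated data are hypothetical): each voxel's lesion status splits patients into two groups whose behavioral scores are compared with a Welch t-statistic.

```python
import numpy as np

def lesion_symptom_map(lesions, scores):
    """Toy voxel-based lesion-symptom mapping.
    lesions: (n_patients, n_voxels) binary lesion masks
    scores:  (n_patients,) behavioral scores
    Returns one Welch t-statistic per voxel (spared minus lesioned group
    mean); voxels lesioned or spared in fewer than 2 patients get NaN."""
    n_patients, n_voxels = lesions.shape
    t = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        les = scores[lesions[:, v] == 1]
        spa = scores[lesions[:, v] == 0]
        if len(les) < 2 or len(spa) < 2:
            continue  # not enough patients in one of the groups
        se = np.sqrt(les.var(ddof=1) / len(les) + spa.var(ddof=1) / len(spa))
        t[v] = (spa.mean() - les.mean()) / se
    return t
```

A large positive t at a voxel means that patients with damage there score worse, which is the basic inference the study builds on (before its connectome extension and multiple-comparison correction).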


  2 / 1111 MEDLINE  
[PMID]:29040288
[Au] Author:Febres G; Jaffe K
[Ad] Address:Departamento de Procesos y Sistemas, Universidad Simón Bolívar, Sartenejas, Baruta, Miranda, Venezuela.
[Ti] Title:Music viewed by its entropy content: A novel window for comparative analysis.
[So] Source:PLoS One;12(10):e0185757, 2017.
[Is] ISSN:1932-6203
[Cp] Country of publication:United States
[La] Language:eng
[Ab] Abstract:Polyphonic music files were analyzed using the set of symbols that produced the Minimal Entropy Description, which we call the Fundamental Scale. This allowed us to create a novel space to represent music pieces by developing: (a) a method to adjust a textual description from its original scale of observation to an arbitrarily selected scale, (b) a method to model the structure of any textual description based on the shape of the symbol frequency profiles, and (c) the concept of higher order entropy as the entropy associated with the deviations of a frequency-ranked symbol profile from a perfect Zipfian profile. We call this diversity index the '2nd Order Entropy'. Applying these methods to a variety of musical pieces showed how the space of 'symbolic specific diversity-entropy' and that of '2nd order entropy' capture characteristics that are unique to each music type, style, composer and genre. Some clustering of these properties around each musical category is shown. These methods allow us to visualize a historic trajectory of academic music across this space, from medieval to contemporary academic music. We show that the description of musical structures using entropy, symbol frequency profiles and specific symbolic diversity allows us to characterize traditional and popular expressions of music. These classification techniques promise to be useful in other disciplines for pattern recognition and machine learning.
[Mh] Primary MeSH terms: Music/psychology
Natural Language Processing
Pattern Recognition, Automated
Pattern Recognition, Physiological/physiology
[Mh] Secondary MeSH terms: Acoustic Stimulation
Entropy
Humans
Information Storage and Retrieval
Markov Chains
[Pt] Publication type:COMPARATIVE STUDY; JOURNAL ARTICLE
[Em] Entry month:1710
[Cu] Class update:171031
[Lr] Last revision date:171031
[Sb] Journal subset:IM
[Da] Processing entry date:171018
[St] Status:MEDLINE
[do] DOI:10.1371/journal.pone.0185757
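The abstract's '2nd Order Entropy' idea (entropy of the deviations of a rank-frequency profile from a perfect Zipfian profile) can be sketched in a few lines. This is one plausible reading of the definition, not the authors' exact formula; all function names are hypothetical.

```python
import math
from collections import Counter

def rank_frequency_profile(text):
    """Relative symbol frequencies, sorted descending (rank 1 first)."""
    counts = Counter(text)
    total = sum(counts.values())
    return sorted((c / total for c in counts.values()), reverse=True)

def shannon_entropy(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def zipf_profile(n):
    """Perfect Zipfian profile 1/r, normalized over n ranks."""
    weights = [1 / r for r in range(1, n + 1)]
    z = sum(weights)
    return [w / z for w in weights]

def second_order_entropy(text):
    """Entropy of the normalized deviations of the observed
    rank-frequency profile from a perfect Zipfian one."""
    obs = rank_frequency_profile(text)
    dev = [abs(o, ) if False else abs(o - z)
           for o, z in zip(obs, zipf_profile(len(obs)))]
    s = sum(dev)
    if s == 0:
        return 0.0  # profile is exactly Zipfian (or a single symbol)
    return shannon_entropy([d / s for d in dev if d > 0])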
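The abstract's '2nd Order Entropy' idea (entropy of the deviations of a rank-frequency profile from a perfect Zipfian profile) can be sketched in a few lines. This is one plausible reading of the definition, not the authors' exact formula; all function names are hypothetical.

```python
import math
from collections import Counter

def rank_frequency_profile(text):
    """Relative symbol frequencies, sorted descending (rank 1 first)."""
    counts = Counter(text)
    total = sum(counts.values())
    return sorted((c / total for c in counts.values()), reverse=True)

def shannon_entropy(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def zipf_profile(n):
    """Perfect Zipfian profile 1/r, normalized over n ranks."""
    weights = [1 / r for r in range(1, n + 1)]
    z = sum(weights)
    return [w / z for w in weights]

def second_order_entropy(text):
    """Entropy of the normalized deviations of the observed
    rank-frequency profile from a perfect Zipfian one."""
    obs = rank_frequency_profile(text)
    dev = [abs(o - z) for o, z in zip(obs, zipf_profile(len(obs)))]
    s = sum(dev)
    if s == 0:
        return 0.0  # profile is exactly Zipfian (or a single symbol)
    return shannon_entropy([d / s for d in dev if d > 0])
```

Applied to a symbolized music file, a low value would indicate a near-Zipfian symbol distribution; the paper uses this axis together with specific diversity to place pieces in a comparison space.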


  3 / 1111 MEDLINE  
[PMID]:28973562
[Au] Author:Schmidt F; Hegele M; Fleming RW
[Ad] Address:Department of Psychology, Justus-Liebig-University Giessen, Giessen, Germany.
[Ti] Title:Perceiving animacy from shape.
[So] Source:J Vis;17(11):10, 2017 Sep 01.
[Is] ISSN:1534-7362
[Cp] Country of publication:United States
[La] Language:eng
[Ab] Abstract:Superordinate visual classification (for example, identifying an image as "animal," "plant," or "mineral") is computationally challenging because radically different items (e.g., "octopus," "dog") must be grouped into a common class ("animal"). It is plausible that learning superordinate categories teaches us not only the membership of particular (familiar) items, but also general features that are shared across class members, aiding us in classifying novel (unfamiliar) items. Here, we investigated visual shape features associated with animate and inanimate classes. One group of participants viewed images of 75 unfamiliar and atypical items and provided separate ratings of how much each image looked like an animal, plant, and mineral. Results show systematic tradeoffs between the ratings, indicating a class-like organization of items. A second group rated each image in terms of 22 midlevel shape features (e.g., "symmetrical," "curved"). The results confirm that superordinate classes are associated with particular shape features (e.g., "animals" generally have high "symmetry" ratings). Moreover, linear discriminant analysis based on the 22-D feature vectors predicts the perceived classes approximately as well as the ground truth classification. This suggests that a generic set of midlevel visual shape features forms the basis for superordinate classification of novel objects along the animacy continuum.
[Mh] Primary MeSH terms: Form Perception/physiology
Pattern Recognition, Physiological/physiology
Pattern Recognition, Visual/physiology
Visual Cortex/physiology
[Mh] Secondary MeSH terms: Animals
Humans
Neuropsychological Tests
Photic Stimulation
Reaction Time
Young Adult
[Pt] Publication type:JOURNAL ARTICLE
[Em] Entry month:1710
[Cu] Class update:171018
[Lr] Last revision date:171018
[Sb] Journal subset:IM
[Da] Processing entry date:171004
[St] Status:MEDLINE
[do] DOI:10.1167/17.11.10
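The linear discriminant analysis step named in the abstract — predicting a class from a 22-D midlevel feature vector — can be sketched as a minimal two-class Fisher LDA in NumPy. The data here are synthetic and the function names are hypothetical; the study's actual analysis and ratings are not reproduced.

```python
import numpy as np

def fisher_lda_fit(X, y):
    """Fit a two-class Fisher linear discriminant.
    X: (n_samples, n_features); y: array of 0/1 labels.
    Returns (w, threshold) so that X @ w > threshold predicts class 1."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter, lightly regularized for stability.
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += 1e-6 * np.eye(X.shape[1])
    w = np.linalg.solve(Sw, m1 - m0)     # discriminant direction
    threshold = w @ (m0 + m1) / 2        # midpoint between class means
    return w, threshold

def fisher_lda_predict(X, w, threshold):
    return (X @ w > threshold).astype(int)
```

With 22 well-separated synthetic features (standing in for the rated shape features), the discriminant recovers the class labels; the paper's point is that perceived animacy is about as predictable from such features as the ground-truth classes are.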


  4 / 1111 MEDLINE  
[PMID]:28841641
[Au] Author:Bejjanki VR; da Silveira RA; Cohen JD; Turk-Browne NB
[Ad] Address:Department of Psychology, Princeton University, Princeton, NJ, United States of America.
[Ti] Title:Noise correlations in the human brain and their impact on pattern classification.
[So] Source:PLoS Comput Biol;13(8):e1005674, 2017 Aug.
[Is] ISSN:1553-7358
[Cp] Country of publication:United States
[La] Language:eng
[Ab] Abstract:Multivariate decoding methods, such as multivoxel pattern analysis (MVPA), are highly effective at extracting information from brain imaging data. Yet, the precise nature of the information that MVPA draws upon remains controversial. Most current theories emphasize the enhanced sensitivity imparted by aggregating across voxels that have mixed and weak selectivity. However, beyond the selectivity of individual voxels, neural variability is correlated across voxels, and such noise correlations may contribute importantly to accurate decoding. Indeed, a recent computational theory proposed that noise correlations enhance multivariate decoding from heterogeneous neural populations. Here we extend this theory from the scale of neurons to functional magnetic resonance imaging (fMRI) and show that noise correlations between heterogeneous populations of voxels (i.e., voxels selective for different stimulus variables) contribute to the success of MVPA. Specifically, decoding performance is enhanced when voxels with high vs. low noise correlations (measured during rest or in the background of the task) are selected during classifier training. Conversely, voxels that are strongly selective for one class in a GLM or that receive high classification weights in MVPA tend to exhibit high noise correlations with voxels selective for the other class being discriminated. Furthermore, we use simulations to show that this is a general property of fMRI data and that selectivity and noise correlations can have distinguishable influences on decoding. Taken together, our findings demonstrate that if there is signal in the data, the resulting above-chance classification accuracy is modulated by the magnitude of noise correlations.
[Mh] Primary MeSH terms: Brain/physiology
Magnetic Resonance Imaging/methods
Models, Neurological
Neurons/physiology
Pattern Recognition, Physiological/physiology
[Mh] Secondary MeSH terms: Adult
Algorithms
Attention/physiology
Humans
[Pt] Publication type:JOURNAL ARTICLE
[Em] Entry month:1709
[Cu] Class update:170918
[Lr] Last revision date:170918
[Sb] Journal subset:IM
[Da] Processing entry date:170826
[St] Status:MEDLINE
[do] DOI:10.1371/journal.pcbi.1005674
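The core mechanism the abstract describes — noise that is shared between voxels with opposite selectivity cancels in a contrast read-out, helping decoding — can be shown in a toy simulation. This sketch only illustrates the principle under stated assumptions (binary classes, one shared noise source, a matched-filter read-out); it is not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_vox = 200, 40
labels = rng.integers(0, 2, n_trials)

# Half the voxels weakly prefer class 1, half prefer class 0.
selectivity = np.r_[np.ones(n_vox // 2), -np.ones(n_vox // 2)] * 0.3

# A shared noise source induces positive correlations between ALL
# voxels, including pairs selective for opposite classes.
shared = rng.normal(0, 1, (n_trials, 1))
noise = 0.8 * shared + rng.normal(0, 1, (n_trials, n_vox))
responses = np.where(labels[:, None] == 1, selectivity, -selectivity) + noise

# Noise correlations between the two oppositely selective populations,
# measured directly on the noise (as if during rest).
noise_corr = np.corrcoef(noise, rowvar=False)
between = noise_corr[: n_vox // 2, n_vox // 2:].mean()

# A matched-filter contrast subtracts anti-selective voxels, which
# cancels the shared noise, so decoding succeeds despite weak voxels.
decoder = responses @ selectivity
accuracy = ((decoder > 0).astype(int) == labels).mean()
```

Because the selectivity weights sum to zero, the shared noise component drops out of `decoder` entirely, while the weak per-voxel signals add up; this is the sense in which high noise correlations between oppositely selective voxels can help rather than hurt.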


  5 / 1111 MEDLINE  
[PMID]:28821680
[Au] Author:Puvvada KC; Simon JZ
[Ad] Address:Department of Electrical & Computer Engineering.
[Ti] Title:Cortical Representations of Speech in a Multitalker Auditory Scene.
[So] Source:J Neurosci;37(38):9189-9196, 2017 Sep 20.
[Is] ISSN:1529-2401
[Cp] Country of publication:United States
[La] Language:eng
[Ab] Abstract:The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects.
[Mh] Primary MeSH terms: Auditory Cortex/physiology
Nerve Net/physiology
Pattern Recognition, Physiological/physiology
Speech Perception/physiology
Speech/physiology
[Mh] Secondary MeSH terms: Acoustic Stimulation/methods
Cues
Female
Humans
Male
Young Adult
[Pt] Publication type:JOURNAL ARTICLE
[Em] Entry month:1710
[Cu] Class update:171008
[Lr] Last revision date:171008
[Sb] Journal subset:IM
[Da] Processing entry date:170820
[St] Status:MEDLINE
[do] DOI:10.1523/JNEUROSCI.0938-17.2017
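The "systems-theoretic stimulus reconstruction" the abstract relies on is, at its simplest, a linear backward model: a regularized regression from multichannel neural responses back to the stimulus envelope, with reconstruction fidelity measured by correlation. The sketch below assumes a plain ridge solution on synthetic data; the study's actual filters include time lags and cross-validation, which are omitted here.

```python
import numpy as np

def ridge_reconstruct(R, s, lam=1.0):
    """Fit a linear backward model g mapping neural channels
    R (time, channels) to the stimulus envelope s (time,),
    with ridge regularization lam."""
    G = R.T @ R + lam * np.eye(R.shape[1])
    return np.linalg.solve(G, R.T @ s)

# Toy check: channel responses are noisy mixtures of the envelope.
rng = np.random.default_rng(0)
T, C = 1000, 8
s = rng.normal(size=T)                      # stand-in stimulus envelope
mixing = rng.normal(size=C)                 # per-channel gains
R = np.outer(s, mixing) + 0.5 * rng.normal(size=(T, C))

g = ridge_reconstruct(R, s)
s_hat = R @ g
fidelity = np.corrcoef(s, s_hat)[0, 1]      # reconstruction fidelity
```

In the study, this kind of fidelity score is what distinguishes representations: near-equal fidelity for attended and ignored streams in primary-like areas, but attention-weighted fidelity in higher-order areas.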


  6 / 1111 MEDLINE  
[PMID]:28659431
[Au] Author:Lafaille-Magnan ME; Poirier J; Etienne P; Tremblay-Mercier J; Frenette J; Rosa-Neto P; Breitner JCS; PREVENT-AD Research Group
[Ad] Address:From the Centre for Studies on Prevention of AD (M.-E.L.-M., J.P., P.E., J.T.-M., J.F., P.R.-N., J.C.S.B.) and McGill Centre for Studies in Aging (P.R.-N.), Douglas Mental Health University Institute, McGill University, Faculty of Medicine, Montreal, Quebec, Canada. Marie-Elyse.Lafaille-Magnan@mail.
[Ti] Title:Odor identification as a biomarker of preclinical AD in older adults at risk.
[So] Source:Neurology;89(4):327-335, 2017 Jul 25.
[Is] ISSN:1526-632X
[Cp] Country of publication:United States
[La] Language:eng
[Ab] Abstract:OBJECTIVE: To assess odor identification (OI) as an indicator of presymptomatic Alzheimer disease (AD) pathogenesis in cognitively normal aging individuals at increased risk of AD dementia. METHODS: In 274 members of the PREVENT-AD cohort of healthy aging persons with a parental or multiple-sibling history of AD dementia, we assessed the cross-sectional association of OI with potential indicators of presymptomatic AD. Some 101 participants donated CSF, thus enabling assessment of AD pathology with the biomarkers total tau (t-tau), phospho-tau (P-tau), and their ratios with β-amyloid (Aβ). Adjusted analyses considered age, cognition, ε4 status, education, and sex as covariates. We measured OI using the University of Pennsylvania Smell Identification Test and cognitive performance using the Repeatable Battery for Assessment of Neuropsychological Status. Standard kits provided assays of the AD biomarkers. Analyses used robust-fit linear regression models. RESULTS: Reduced OI was associated with lower cognitive score and older age, as well as increased ratios of CSF t-tau and P-tau to Aβ (all p < 0.02). However, the observed associations of OI with age and cognition were unapparent in adjusted models that restricted observations to CSF donors and included AD biomarkers. OI showed little association with CSF Aβ alone except in ε4 carriers having lowest-quartile Aβ levels. CONCLUSIONS: These findings from healthy high-risk older individuals suggest that OI reflects degree of preclinical AD pathology, while its relationships with age and cognition result from the association of these latter variables with such pathology. Diminished OI may be a practical and affordable biomarker of AD pathology.
[Mh] Primary MeSH terms: Alzheimer Disease/diagnosis
Olfactory Perception
Pattern Recognition, Physiological
Recognition (Psychology)
[Mh] Secondary MeSH terms: Aged
Aged, 80 and over
Aging/physiology
Aging/psychology
Alzheimer Disease/cerebrospinal fluid
Alzheimer Disease/genetics
Alzheimer Disease/physiopathology
Amyloid beta-Peptides/cerebrospinal fluid
Apolipoprotein E4/genetics
Biomarkers/cerebrospinal fluid
Cognition
Cohort Studies
Cross-Sectional Studies
Female
Humans
Male
Middle Aged
Peptide Fragments/cerebrospinal fluid
Phosphorylation
Prodromal Symptoms
Risk
tau Proteins/cerebrospinal fluid
[Pt] Publication type:JOURNAL ARTICLE
[Nm] Substance name:0 (Amyloid beta-Peptides); 0 (Apolipoprotein E4); 0 (Biomarkers); 0 (MAPT protein, human); 0 (Peptide Fragments); 0 (amyloid beta-protein (1-42)); 0 (tau Proteins)
[Em] Entry month:1707
[Cu] Class update:170906
[Lr] Last revision date:170906
[Sb] Journal subset:AIM; IM
[Da] Processing entry date:170630
[St] Status:MEDLINE
[do] DOI:10.1212/WNL.0000000000004159


  7 / 1111 MEDLINE  
[PMID]:28394938
[Au] Author:Arnold D; Tomaschek F; Sering K; Lopez F; Baayen RH
[Ad] Address:Quantitative Linguistics, Seminar für Sprachwissenschaft, Eberhard Karls Universität Tübingen, Tübingen, Germany.
[Ti] Title:Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit.
[So] Source:PLoS One;12(4):e0174623, 2017.
[Is] ISSN:1932-6203
[Cp] Country of publication:United States
[La] Language:eng
[Ab] Abstract:Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model trained on 20 hours of conversational speech that recognizes word meanings within the range of human performance (model 25%, native speakers 20-44%), without making use of phone or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a 'wide' yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.
[Mh] Primary MeSH terms: Algorithms
Computer Simulation
Speech
[Mh] Secondary MeSH terms: Comprehension
Female
Humans
Male
Pattern Recognition, Physiological
Phonetics
Recognition (Psychology)
Sound Spectrography
Speech Acoustics
Speech Perception
Speech Recognition Software
Young Adult
[Pt] Publication type:JOURNAL ARTICLE
[Em] Entry month:1708
[Cu] Class update:170830
[Lr] Last revision date:170830
[Sb] Journal subset:IM
[Da] Processing entry date:170411
[St] Status:MEDLINE
[do] DOI:10.1371/journal.pone.0174623
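The "wide yet sparse two-layer network" trained with error-driven learning that this abstract describes is, in spirit, Widrow-Hoff (Rescorla-Wagner) learning from sparse cue sets to lexical outcomes. The sketch below illustrates that scheme on a tiny toy vocabulary; the cue inventory, learning rate, and function names are hypothetical stand-ins for the authors' acoustic features.

```python
import numpy as np

def train_discriminative(cues, outcomes, n_cues, n_outcomes,
                         lr=0.1, epochs=50):
    """Widrow-Hoff / Rescorla-Wagner error-driven learning.
    cues: list of index-arrays of active cues per learning event
    outcomes: list with the single correct outcome index per event."""
    W = np.zeros((n_cues, n_outcomes))
    for _ in range(epochs):
        for c, o in zip(cues, outcomes):
            target = np.zeros(n_outcomes)
            target[o] = 1.0
            pred = W[c].sum(axis=0)        # activation from active cues
            W[c] += lr * (target - pred)   # update only active-cue rows
    return W

def recognize(W, c):
    """Pick the outcome with the highest summed activation."""
    return int(W[c].sum(axis=0).argmax())
```

Because updates are driven by prediction error, cues shared between words end up with discounted weights while discriminative cues gain weight, which is how the model can map acoustic features straight to meanings without a phone layer.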


  8 / 1111 MEDLINE  
[PMID]:28384805
[Au] Author:Moberly AC; Harris MS; Boyce L; Nittrouer S
[Ad] Address:Department of Otolaryngology-Head and Neck Surgery, Wexner Medical Center, The Ohio State University, Columbus.
[Ti] Title:Speech Recognition in Adults With Cochlear Implants: The Effects of Working Memory, Phonological Sensitivity, and Aging.
[So] Source:J Speech Lang Hear Res;60(4):1046-1061, 2017 04 14.
[Is] ISSN:1558-9102
[Cp] Country of publication:United States
[La] Language:eng
[Ab] Abstract:Purpose: Models of speech recognition suggest that "top-down" linguistic and cognitive functions, such as use of phonotactic constraints and working memory, facilitate recognition under conditions of degradation, such as in noise. The question addressed in this study was what happens to these functions when a listener who has experienced years of hearing loss obtains a cochlear implant. Method: Thirty adults with cochlear implants and 30 age-matched controls with age-normal hearing underwent testing of verbal working memory using digit span and serial recall of words. Phonological capacities were assessed using a lexical decision task and nonword repetition. Recognition of words in sentences in speech-shaped noise was measured. Results: Implant users had only slightly poorer working memory accuracy than did controls and only on serial recall of words; however, phonological sensitivity was highly impaired. Working memory did not facilitate speech recognition in noise for either group. Phonological sensitivity predicted sentence recognition for implant users but not for listeners with normal hearing. Conclusion: Clinical speech recognition outcomes for adult implant users relate to the ability of these users to process phonological information. Results suggest that phonological capacities may serve as potential clinical targets through rehabilitative training. Such novel interventions may be particularly helpful for older adult implant users.
[Mh] Primary MeSH terms: Aging/psychology
Cochlear Implants
Memory, Short-Term
Pattern Recognition, Physiological
Phonetics
Speech Perception
[Mh] Secondary MeSH terms: Aged
Aged, 80 and over
Decision Making
Female
Hearing Loss/psychology
Hearing Loss/rehabilitation
Humans
Language Tests
Male
Mental Recall
Interview, Psychological
Middle Aged
Neuropsychological Tests
Reaction Time
Recognition (Psychology)
Reproducibility of Results
[Pt] Publication type:JOURNAL ARTICLE; RESEARCH SUPPORT, NON-U.S. GOV'T; RESEARCH SUPPORT, N.I.H., EXTRAMURAL
[Em] Entry month:1707
[Cu] Class update:171113
[Lr] Last revision date:171113
[Sb] Journal subset:IM
[Da] Processing entry date:170407
[St] Status:MEDLINE
[do] DOI:10.1044/2016_JSLHR-H-16-0119


  9 / 1111 MEDLINE  
[PMID]:28384727
[Au] Author:Harwood V; Preston J; Grela B; Roy D; Harold O; Turcios J; Andrada K; Landi N
[Ad] Address:University of Connecticut, Storrs; Haskins Laboratories, New Haven, CT.
[Ti] Title:Electrophysiology of Perception and Processing of Phonological Information as Indices of Toddlers' Language Performance.
[So] Source:J Speech Lang Hear Res;60(4):999-1011, 2017 04 14.
[Is] ISSN:1558-9102
[Cp] Country of publication:United States
[La] Language:eng
[Ab] Abstract:Purpose: The toddler years are a critical period for language development and growth. We investigated how event-related potentials (ERPs) to repeated and novel nonwords are associated with clinical assessments of language in young children. In addition, nonword repetition (NWR) was used to measure phonological working memory to determine the unique and collective contribution of ERP measures of phonemic discrimination and NWR as predictors of language ability. Method: Forty children between the ages of 24 and 48 months participated in an ERP experiment to determine phonemic discrimination to repeated and novel nonwords in an old/new design. Participants also completed a NWR task to explore the contribution of phonological working memory in predicting language. Results: ERP analyses revealed that faster responses to novel stimuli correlated with higher language performance on clinical assessments of language. Regression analyses revealed that an earlier component was associated with lower level phonemic sensitivity, and a later component was indexing phonological working memory skills similar to NWR. Conclusion: Our findings suggest that passive ERP responses indexing phonological discrimination and phonological working memory are strongly related to behavioral measures of language.
[Mh] Primary MeSH terms: Brain/physiology
Child Language
Discrimination (Psychology)/physiology
Pattern Recognition, Physiological/physiology
Phonetics
Speech Perception/physiology
[Mh] Secondary MeSH terms: Child, Preschool
Electroencephalography
Evoked Potentials
Female
Humans
Language Tests
Male
Memory, Short-Term
Reaction Time
Regression Analysis
Signal Processing, Computer-Assisted
[Pt] Publication type:JOURNAL ARTICLE; RESEARCH SUPPORT, NON-U.S. GOV'T
[Em] Entry month:1707
[Cu] Class update:171113
[Lr] Last revision date:171113
[Sb] Journal subset:IM
[Da] Processing entry date:170407
[St] Status:MEDLINE
[do] DOI:10.1044/2016_JSLHR-L-15-0437


  10 / 1111 MEDLINE  
[PMID]:28361660
[Au] Author:Shigeno S
[Ad] Address:Department of Psychology, College of Education, Psychology and Human Studies, Aoyama Gakuin University, Tokyo, Japan.
[Ti] Title:Effects of Auditory and Visual Priming on the Identification of Spoken Words.
[So] Source:Percept Mot Skills;124(2):549-563, 2017 Apr.
[Is] ISSN:1558-688X
[Cp] Country of publication:United States
[La] Language:eng
[Ab] Abstract:This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.
[Mh] Primary MeSH terms: Pattern Recognition, Physiological/physiology
Psychomotor Performance/physiology
Repetition Priming/physiology
Speech Perception/physiology
[Mh] Secondary MeSH terms: Adolescent
Adult
Female
Humans
Male
Pattern Recognition, Visual/physiology
Young Adult
[Pt] Publication type:JOURNAL ARTICLE
[Em] Entry month:1705
[Cu] Class update:170529
[Lr] Last revision date:170529
[Sb] Journal subset:IM
[Da] Processing entry date:170401
[St] Status:MEDLINE
[do] DOI:10.1177/0031512516684459



Search engine: iAH v2.6 powered by WWWISIS

BIREME/PAHO/WHO - Latin American and Caribbean Center on Health Sciences Information