Horacio Saggion and Guy Lapalme. Selective Analysis for Automatic Abstracting: Evaluating
Indicativeness and Acceptability. Université de Montréal. (on line). Accessibility: http://www.iro.umontreal.ca/~saggion/evaluation2.pdf
Abstract
The authors have developed a new methodology for automatic abstracting of scientific and technical
articles called Selective Analysis. This methodology allows the generation of indicative informative abstracts integrating different types of information extracted from the source text.
The indicative part of the abstract identifies the topics of the document while the informative one
elaborates some topics according to the reader’s interest. The first evaluation of the methodology
demonstrates that Selective Analysis performs well in the task of signaling the topic of the
document, demonstrating the viability of such a technique. The sentences the system produces
from instantiated templates are considered as acceptable as human-produced sentences.
DUBLIN CORE ELEMENTS
D.C Title : Selective Analysis for Automatic Abstracting: Evaluating
Indicativeness and Acceptability
D.C Creator : Horacio Saggion and Guy Lapalme
D.C Subject : automatic abstracting, scientific and technical article, selective analysis, indicative-informative abstract
D.C Description : a new methodology for automatic abstracting of scientific and technical
articles called Selective Analysis.
D.C Publisher : Université de Montréal
D.C Contributor :
D.C Date :
D.C Type : thesis
D.C Format : PDF
D.C Identifier : http://www.iro.umontreal.ca/~saggion/evaluation2.pdf
D.C Source :
D.C Language : en
D.C Relation :
D.C Coverage :
D.C Rights : Horacio Saggion and Guy Lapalme
AUTOMATIC INDEXING
Tulic, Martin. Automatic indexing. 04.03.05 (on line). Accessibility: http://www.anindexer.com/about/auto/autoindex.html
The popularity of Internet search engines has led many people to think of the process of entering queries to retrieve documents from the Web as automatic indexing. It is not.
Automatic indexing is the process of assigning and arranging index terms for natural-language texts without human intervention. For several decades, there have been many attempts to create such processes, driven both by the intellectual challenge and by the desire to significantly reduce the time and cost of producing indexes. Dozens if not hundreds of computer programs have been written to identify the words in a text and their location, and to alphabetize the words. Typically, definite and indefinite articles, prepositions and other words on a so-called stop list are not included in the program's output. Even some word processors provide this capability. Nevertheless, computer-generated results are often more like concordances (lists of words in a document) than truly usable indexes. There are several reasons for this.
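The word-and-location extraction described here can be sketched in a few lines of Python; the stop list below is illustrative, not any particular program's list:

```python
from collections import defaultdict

STOP_WORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "is", "it"}

def word_locations(text):
    """Map each word not on the stop list to the token positions where it
    occurs, and return the entries in alphabetical order."""
    locations = defaultdict(list)
    for position, token in enumerate(text.lower().split()):
        word = token.strip(".,;:!?\"'()")
        if word and word not in STOP_WORDS:
            locations[word].append(position)
    return dict(sorted(locations.items()))

entries = word_locations("The index maps a word to the locations of the word.")
```

The output is exactly the concordance-like result the text describes: an alphabetized word list with locations, with no judgment about which words signify actual topics.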
The primary reason computers cannot automatically generate usable indexes is that, in indexing, abstraction is more important than alphabetization. Abstractions result from intellectual processes based on judgments about what to include and what to exclude. Computers are good at algorithmic processes such as alphabetization, but not good at inexplicable processes such as abstraction. Another reason is that headings in an index do not depend solely on terms used in the document; they also depend on terminology employed by intended users of the index and on their familiarity with the document. For example: in medical indexing, separate entries may need to be provided for brand names of drugs, chemical names, popular names and names used in other countries, even when certain of the names are not mentioned in the text. A third reason is that indexes should not contain headings for topics for which there is no information in the document. A typical document includes many terms signifying topics about which it contains no information. Computer programs include those terms in their results because they lack the intelligence required to distinguish terms signifying topics about which information is presented from terms about which no information is presented. A fourth reason is that headings and subheadings should be tailored to the needs and viewpoints of anticipated users. Some are aimed at users who are very knowledgeable about topics addressed in the document; others at users with little knowledge. Some are reminders to those who read the document already; others are enticements to potential readers. To date, no one has found a way to provide computer programs with the judgment, expertise, intelligence or audience awareness that is needed to create usable indexes. Until they do, automatic indexing will remain a pipe dream.
Although automated indexing is a pipe dream, computers are nevertheless an essential tool used by (but not a replacement for) indexers.
DUBLIN CORE ELEMENTS
D.C Title : Automatic indexing
D.C Creator : Tulic Martin
D.C Subject : index, computer program, indexing, abstraction
D.C Description : it presents the reasons why computers cannot automatically generate usable indexes.
D.C Publisher : Tulic Martin
D.C Contributor :
D.C Date : 04-03-05
D.C Type : article
D.C Format : HTML
D.C Identifier : http://www.anindexer.com/about/auto/autoindex.html
D.C Source :
D.C Language : en
D.C Relation :
D.C Coverage :
D.C Rights : Martin Tulic
AUTOMATIC INDEXING
BROWNE, Glenda. Automatic indexing. ANZI (Australian and
New Zealand Society of Indexers), 1996. (on line).
Accessibility: http://www.aussi.org/conferences/papers/browneg.htm
Introduction
This paper will examine developments in automatic indexing and abstracting in which the computer creates the index and abstract, with little or no human intervention. The emphasis is on practical applications, rather than theoretical studies. This paper does not cover computer-aided indexing, in which computers enhance the work of human indexers, or indexing of the Internet.
Research into automatic indexing and abstracting has been progressing since the late 1950's. Early reports claimed success, but practical applications have been limited. Computer indexing and abstracting are now being used commercially, with prospects for further use in the future. The history of automatic indexing and abstracting is well covered by Lancaster (1991).
Database indexing
Extraction indexing
The simplest method for indexing articles for bibliographic databases is extraction indexing, in which terms are extracted from the text of the article for inclusion in the index. The frequency of words in the article is determined, and the words which are found most often are included in the index. Alternatively, the words which occur most often in the article compared to their occurrence in the rest of the database, or in normal language, are included. This method can also take into account word stems (so that run and running are recognised as referring to the same concept), and can recognise phrases as well as single words.
Computer extraction indexing is more consistent than human extraction indexing. However, most human indexing is not simple extraction indexing, but is assignment indexing, in which the terms used in the index are not necessarily those found in the text.
Assignment indexing
For assignment indexing, the computer has a thesaurus, or controlled vocabulary, which lists all the subject headings which may be used in the index. For each of these subject headings it also has a list of profile words. These are words which, when found in the text of the article, indicate that the thesaurus term should be allocated.
For example, for the thesaurus term childbirth, the profile might include the words: childbirth, birth, labor, labour, delivery, forceps, baby, and born. As well as the profile, the computer also has criteria for inclusion -- instructions as to how often, and in what combination, the profile words must be present for that thesaurus term to be allocated.
The criteria might say, for example, that if the word childbirth is found ten times in an article, then the thesaurus term childbirth will be allocated. However if the word delivery is found ten times in an article, this in itself is not enough to warrant allocation of the term childbirth, as delivery could be referring to other subjects such as mail delivery. The criteria in this case would specify that the term delivery must occur a certain number of times, along with one or more of the other terms in the profile.
Computer database indexing in practice
In practice in database indexing, there is a continuum of use of computers, from no computer at all to fully automatic indexing.
• No computer.
• Computer clerical support, e.g. for data entry.
• Computer quality control, e.g. checking that all index terms are valid thesaurus terms.
• Computer intellectual assistance, e.g. helping with term choice and weighting.
• Automatic indexing (Hodge 1994).
Most database producers use computers at a number of different steps along this continuum. At the moment, however, automatic indexing is only ever used for a part of a database, for example, for a specific subject, access point, or document type.
Automatic indexing is used by the Defense Technology Information Center (DTIC) for the management-related literature in its database; it is used by FIZ Karlsruhe for indexing chemical names; it was used until 1992 by the Russian International Centre for Scientific and Technical Information (ICSTI) for its Russian language materials; and it was used by INSPEC for the re-indexing of its backfiles to new standards (Hodge 1994).
BIOSIS (Biological Abstracts) uses computers at all steps on the continuum, and uses automatic indexing in a number of areas. Title keywords are mapped by computer to the Semantic Vocabulary of 15,000 words; the terms from the Semantic Vocabulary are then mapped to one of 600 Concept Headings (that is, subject headings which describe the broad subject area of a document; Lancaster 1991).
The version of BIOSIS Previews available on the database host STN International uses automatic indexing to allocate Chemical Abstracts Service Registry Numbers to articles to describe the chemicals, drugs, enzymes and biosequences discussed in the article. The codes are allocated without human review, but a human operator spends five hours per month maintaining authority files and rules (Hodge 1994).
Retrieval and ranking tools
There are two sides to the information retrieval process: documents must be indexed (by humans or computers) to describe their subject content; and documents must be retrieved using retrieval software and appropriate search statements.
Retrieval and ranking tools include those used with bibliographic databases, the 'indexes' used on the Internet, and personal computer software packages such as Personal Librarian (Koll 1993). Some programs, such as ISYS, are specialised for the fast retrieval of search words.
In theory these are complementary approaches, and both are needed for optimal retrieval. In practice, however, especially with documents in full-text databases, indexing is often omitted, and the retrieval software is relied on instead.
For these documents, which will not be indexed, it is important to ensure the best possible access. To accomplish this, the authors of the documents must be aware of the searching methods which will be used to retrieve them. Authors must use appropriate keywords throughout the text, and ensure that keywords are included in the title and section headings, as these are often given priority by retrieval and ranking tools (Sunter 1995).
The process whereby the creators of documents structure them to enhance retrieval is known as bottom-up indexing. A role for professional indexers in bottom-up indexing is as guides and trainers to document authors (Locke 1993).
One reason that automatic indexing may be unsuited to book indexing is that book indexes are not usually available electronically, and cannot be used in conjunction with powerful search software (Mulvany and Milstead 1994).
Document abstracting
Computers abstract documents (that is, condense their text) by searching for high frequency words in the text, and then selecting sentences in which clusters of these high frequency words occur. These sentences are then used in the order in which they appear in the text to make up the abstract. Flow can be improved by adding extra sentences (for example, if a sentence begins with 'Hence' or 'However' the previous sentence can be included as well) but the abstract remains an awkward collection of grammatically unrelated sentences.
To try and show the subject content, weighting can be given to sentences from certain locations in the document (e.g. the introduction) and to sentences containing cue words (e.g. 'finally', which suggests that a conclusion is starting). In addition, an organisation can give a weighting to words which are important to them: a footwear producer, for example, could require that every sentence containing the words foot or shoe should be included in the abstract.
Computer abstracting works best for documents which are written formally and consistently. It has been used with some success for generating case summaries from the text of legal decisions (Lancaster 1991).
After recent developments in natural language processing by computers, it is now possible for a computer to generate a grammatically correct abstract, in which sentences are modified without loss of meaning.
For example, from the following sentence:
"The need to generate enormous additional amounts of electric power while at the same time protecting the environment is one of the major social and technological problems that our society must solve in the next (sic!) future"
the computer generated the condensed sentence:
"The society must solve in the future the problem of the need to generate power while protecting the environment" (Lancaster 1991). Text summarisation experiments by British Telecom have resulted in useful, readable, abstracts (Farkas 1995).
Book indexing
There are a number of different types of microcomputer based software packages which are used for indexing.
The simplest are concordance generators, in which a list of the words found in the document, with the pages they are on, is generated. It is also possible to specify a list of words such that the concordance program will only include words from that list. This method was used to index drafts of the ISO999 indexing standard to help the committee members keep track of rules while the work was in progress (Shuter 1993).
Computer-aided indexing packages, such as Macrex and Cindex, are used by many professional indexers to enhance their work. They enable the indexer to view the index in alphabetical or page number order, can automatically produce various index styles, and save much typing.
Embedded indexing software is available with computer packages such as word processors, PageMaker, and Framemaker. With embedded indexing the document to be indexed is on disk, and the indexer inserts tags into the document to indicate which index terms should be allocated for that page. It does not matter if the document is then changed, as the index tags will move with the part of the document to which they refer. (So if twenty pages are added at the beginning of the document, all of the other text, including the index tags, will move twenty pages further on.)
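The tag mechanism can be sketched as follows; the {{index:term}} tag syntax is invented for the illustration, but it shows why entries stay correct when text moves to a different page:

```python
import re

def build_index(pages):
    """Collect {{index:term}} tags embedded in a list of page texts and
    map each term to the pages it is tagged on."""
    entries = {}
    for page_number, text in enumerate(pages, start=1):
        for term in re.findall(r"\{\{index:([^}]+)\}\}", text):
            entries.setdefault(term, []).append(page_number)
    return entries

tagged = "{{index:stemming}} Stemming maps run and running together."
before = [tagged]
after = ["(a new preface page)"] + before  # the tagged text moves one page on

old_index = build_index(before)
new_index = build_index(after)
```

Because the tag travels with its text, regenerating the index after the insertion updates the page number automatically, which is exactly the property that makes embedded indexing suit documents revised up to the last minute.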
Disadvantages of embedded indexing are that it is time-consuming to do and awkward to edit (Mulvany 1994). Indexers who use embedded indexing often also use a program such as Macrex or Cindex to overcome these problems.
Embedded indexing is commonly used for documents such as computer software manuals which are published in many versions, and which allow very little time for the index to be created after the text has been finalised. With embedded indexing, indexing can start before the final page proofs are ready.
Embedded indexing will probably be used more in the future: for indexing works which are published in a number of formats; for indexing textbooks which are printed on request using only portions of the original textbook or using a combination of sources; and for indexing electronically published works which are continually adapted. In some of these applications the same person may do the work of the editor and indexer.
The most recent development in microcomputer book indexing software is Indexicon (Version 2), an automatic indexing package.
DUBLIN CORE ELEMENTS
D.C Title: Automatic indexing
D.C Creator : Browne Glenda
D.C Subject : automatic indexing, automatic abstracting, automatic summarizing, retrieval tools, information retrieval, Database indexing, Document abstracting, Book indexing
D.C Description: This paper examines developments in automatic indexing and abstracting in which the computer creates the index and abstract, with little or no human intervention. The emphasis is on practical applications, rather than theoretical studies. This paper does not cover computer-aided indexing, in which computers enhance the work of human indexers, or indexing of the Internet.
Research into automatic indexing and abstracting has been progressing since the late 1950's. Early reports claimed success, but practical applications have been limited. Computer indexing and abstracting are now being used commercially, with prospects for further use in the future. The history of automatic indexing and abstracting is well covered by Lancaster (1991).
D.C Publisher: ANZI, Australian and New Zealand Society of Indexers
D.C Contributor
D.C Date : 1996
D.C Type : Journal article
D.C Format : HTML
D.C Identifier : http://www.aussi.org/conferences/papers/browneg.htm
D.C Source
D.C Language : En
D.C Relation
D.C Coverage
D.C Rights: Glenda Browne