ABSTRACT:
With the advance of sequence-to-sequence learning approaches to generation, Short Text Conversation (STC) has become an attractive task. However, classical sequence-to-sequence approaches to short text conversation often suffer from generating dull, common responses with little diversity, and it is hard to control the topics or semantics of the generated candidates. In this work, a novel external-memory-driven sequence-to-sequence learning approach is proposed to address these problems. The external memory is a tensor built to represent interpretable topics or semantics. During generation, a memory trigger is extracted from the input sequence, and a response is generated using the memory entry selected by the trigger together with a sequence-to-sequence model. Experiments show that the proposed approach can generate responses with richer diversity than traditional sequence-to-sequence training with attention, while achieving good quality in human evaluation. Furthermore, by manually manipulating the memory trigger, the topics or semantics of the response can be guided directly.
EXISTING SYSTEM:
With the widespread use of social media such as Twitter and microblogs in recent years, more and more open-domain conversation data has become available, which makes data-driven approaches to conversation possible. Short Text Conversation (STC) is a simplified conversation task: one round of conversation formed by two short text sequences. It is widely used in chit-chat conversation robots. The former sequence, usually given by a human being, is referred to as a post, while the latter, given by the computer, is referred to as a comment. Research on STC contributes to the development of open-domain conversation systems. There are two major frameworks for short text conversation: retrieval-based methods and generation-based methods. Retrieval-based methods search the STC training corpus to find an existing comment that is most relevant to the post. Generation-based methods usually train a text generation model on the STC corpus and, given a post, generate a comment using the model. Compared to retrieval-based methods, generation-based methods can produce new comments that are not in the training set. This important feature makes generation methods very attractive.
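The retrieval-based framework described above can be sketched as follows. This is a minimal illustration, not the system's actual implementation: it uses simple bag-of-words cosine similarity in place of a real retrieval model, and the tiny post/comment corpus is made up for demonstration.

```python
# Minimal sketch of a retrieval-based STC baseline: given a new post,
# return the comment paired with the most similar post in the training
# corpus, scored by bag-of-words cosine similarity.
import math
from collections import Counter

# Hypothetical (post, comment) training pairs for illustration only.
corpus = [
    ("what a sunny day", "great weather for a walk"),
    ("my flight got delayed", "sorry to hear that, hope it takes off soon"),
    ("just watched a great movie", "which one? I need a recommendation"),
]

def cosine(a, b):
    """Cosine similarity between two sentences as word-count vectors."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_comment(post):
    # Score every training post against the input, return the paired comment.
    best_post, best_comment = max(corpus, key=lambda pc: cosine(post, pc[0]))
    return best_comment

print(retrieve_comment("a sunny day outside"))  # -> great weather for a walk
```

Note the key limitation motivating generation-based methods: this retriever can only ever return comments already present in the training set.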
DISADVANTAGES:
1. Users repeatedly ask the admin the same questions.
2. Maintaining the database is difficult for the admin, and updating the training data set is a very lengthy process.
3. The process wastes a great deal of time.
PROPOSED SYSTEM:
The encoder part encodes the variable-length input sequence into a fixed-length vector. Then, the decoder part generates a variable-length output sequence from this vector word by word. Although this method successfully links variable-length input and output in a single model, it suffers from the vanishing gradient problem when the input is too long. In addition, a fixed-length vector cannot encode sufficient information when the input is long. Attention mechanisms have been proposed to tackle this problem. When generating the next word, the decoder can access all hidden vectors of the encoder. The decoder network then decides which segment of the input is most relevant to the current situation by computing a soft alignment. The alignment is a by-product of the sequence-to-sequence training. The resulting vector is used, as an auxiliary feature, together with the post sentence embedding as input to the decoder during training and generation. By enumerating different semantic keywords extracted from the post, it is possible to generate comments with rich diversity. Moreover, it is even possible to manually manipulate the memory trigger process to introduce new semantics that do not exist in the post.

In this work, we combine the advantages of both approaches and propose a new sequence-to-sequence learning approach for STC. A tensor, in the form of a list of matrices, is constructed to represent the semantics of the comment sentences, referred to as external semantic memory. Each matrix represents all possible comment sentences corresponding to a specific semantic key. Each row vector of the matrix forms a sentence embedding basis, and all row vectors span the whole comment semantic space of that key. During generation, a semantic key is extracted from the input sequence and used to construct a comment sentence embedding from the external memory.
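The external semantic memory lookup described above can be sketched as follows. This is an illustrative toy, not the trained model: the dimensions, keys, random matrices, and mixture weights are all made up, and a real system would learn the memory and the weights from data.

```python
# Sketch of the external semantic memory: a list of matrices (a tensor),
# one matrix per interpretable semantic key; each row of a matrix is a
# sentence embedding basis vector for that key.
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 8      # sentence embedding size (assumed for illustration)
BASIS_ROWS = 4   # basis vectors per semantic key (assumed)

# Hypothetical memory with two semantic keys.
memory = {
    "weather": rng.standard_normal((BASIS_ROWS, EMB_DIM)),
    "travel":  rng.standard_normal((BASIS_ROWS, EMB_DIM)),
}

def memory_embedding(key, weights):
    """Build a comment sentence embedding from the matrix of the given key."""
    basis = memory[key]                   # (BASIS_ROWS, EMB_DIM)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize the mixture weights
    return w @ basis                      # (EMB_DIM,) comment embedding

# A semantic key extracted from the post selects which matrix to read from;
# different keys (or weights) yield different comment embeddings.
emb = memory_embedding("weather", [1.0, 2.0, 1.0, 0.0])
print(emb.shape)  # (8,)
```

Because each matrix is tied to one human-readable key, swapping the key is what makes the topic of the generated comment interpretable and controllable.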
The final comment is then generated using the embedding from external memory as well as the post sequence embedding with a sequence-to-sequence model. By manipulating the semantic keys, it is possible to interpretably guide the topics or the semantics of the comment.
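The attention and final generation step described above can be sketched as follows. Again a toy under stated assumptions: dot-product attention scores, made-up dimensions, and random vectors stand in for the trained encoder states, post embedding, and memory-derived embedding.

```python
# Sketch of one decoding step: compute a soft alignment over the encoder
# hidden vectors, form the attended context, and combine it with the post
# sentence embedding and the external-memory comment embedding.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
T, H = 5, 8                                   # post length and hidden size (assumed)
encoder_states = rng.standard_normal((T, H))  # one hidden vector per post token
decoder_state = rng.standard_normal(H)        # current decoder hidden state

# Soft alignment: dot-product scores normalized with softmax (learned as a
# by-product of sequence-to-sequence training in the full model).
alignment = softmax(encoder_states @ decoder_state)  # (T,) weights, sum to 1
context = alignment @ encoder_states                 # (H,) attended context

# The context vector, the post sentence embedding, and the memory-derived
# comment embedding are combined before predicting the next word.
post_embedding = rng.standard_normal(H)
memory_comment_embedding = rng.standard_normal(H)    # from the external memory
decoder_input = np.concatenate([context, post_embedding, memory_comment_embedding])
print(decoder_input.shape)  # (24,)
```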
ADVANTAGE:
1. The admin no longer has to respond to the same question repeatedly.
2. Text classification becomes a simple process.
3. The admin's time is saved.
SYSTEM REQUIREMENTS
SOFTWARE REQUIREMENTS:
• Programming Language : Python
• Front End Technologies : TKInter/Web (HTML, CSS, JS)
• IDE : Jupyter/Spyder/VS Code
• Operating System : Windows 8/10
HARDWARE REQUIREMENTS:
Processor : Intel Core i3
RAM Capacity : 2 GB
Hard Disk : 250 GB
Monitor : 15″ Color
Mouse : 2 or 3 Button Mouse
Key Board : Standard Keyboard