Package io.token

Class TokenStream

java.lang.Object
io.token.TokenStream

public class TokenStream extends Object
A utility class for performing lexical analysis on a given String. It uses the Scanner class together with TokenReaders, reading the input character by character to tokenize it.
Since:
08.12.2021
Author:
Juyas
  • Constructor Details

    • TokenStream

      public TokenStream(String input)
      Create a new TokenStream for a given input String. Also creates the empty token history list returned by getHistory() and a Scanner with a single-character delimiter, so the input is read one character at a time.
      Parameters:
      input - the input String
  • Method Details

    • eat

      public TokenStream eat(TokenReader reader)
      Try to read a token using the given TokenReader based on the current position of the internal Scanner. This method reads and adds a new token to the history if the TokenReader.canRead(Scanner) method returns true.
      Parameters:
      reader - the reader to read the next token
      Returns:
      this stream
    • eatConditionally

      public TokenStream eatConditionally(TokenReader reader, Predicate<Token> previousToken)
      Try to read a token if the previous token in history matches the given condition.
      Parameters:
      reader - the reader to read the next token
      previousToken - the condition for the previous token
      Returns:
      this stream
    • eatHistorically

      public TokenStream eatHistorically(TokenReader reader, Predicate<LinkedList<Token>> previousToken)
      Try to read a token if the entire token history matches the given condition.
      Parameters:
      reader - the reader to read the next token
      previousToken - the conditions for the previous tokens
      Returns:
      this stream
    • getHistory

      public LinkedList<Token> getHistory()
      The current token history of this stream. Once the input has been fully consumed, this list holds the final tokenization result.
      Returns:
      the current token history list.
    • isOffering

      public boolean isOffering()
      Whether there is any input left to read. Identical to calling Scanner.hasNext() on the internal scanner.
      Returns:
      whether there are characters/tokens left to be read
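      The eat/isOffering/getHistory pattern above can be sketched in plain Java. This is a hypothetical, simplified re-creation for illustration only: the Token record, the TokenReader interface with its read(Scanner) method, and the charReader helper are stand-ins, not the real io.token types.

      ```java
      import java.util.LinkedList;
      import java.util.Scanner;

      public class TokenStreamSketch {

          // Simplified stand-in for io.token's Token type.
          record Token(String type, String value) {}

          // Simplified stand-in for io.token's TokenReader; the read method
          // is an assumed shape for this sketch.
          interface TokenReader {
              boolean canRead(Scanner scanner);
              Token read(Scanner scanner);
          }

          // A reader that accepts a single character matching a regex.
          static TokenReader charReader(String type, String pattern) {
              return new TokenReader() {
                  public boolean canRead(Scanner s) { return s.hasNext(pattern); }
                  public Token read(Scanner s) { return new Token(type, s.next()); }
              };
          }

          static LinkedList<Token> tokenize(String input) {
              Scanner scanner = new Scanner(input);
              scanner.useDelimiter("");                 // single-character delimiter
              LinkedList<Token> history = new LinkedList<>();
              TokenReader[] readers = {
                  charReader("DIGIT", "\\d"),
                  charReader("LETTER", "[a-z]")
              };
              // Assumes every character is accepted by some reader.
              while (scanner.hasNext()) {               // isOffering()
                  for (TokenReader reader : readers) {
                      if (reader.canRead(scanner)) {    // eat(reader)
                          history.add(reader.read(scanner));
                          break;
                      }
                  }
              }
              return history;                           // getHistory()
          }

          public static void main(String[] args) {
              System.out.println(tokenize("a1b2"));
          }
      }
      ```

      Setting an empty delimiter on the Scanner makes each call to next() return exactly one character, which matches the character-by-character reading described for this class.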
    • eh

      public static Predicate<LinkedList<Token>> eh(TokenReader... readers)
      Creates a predicate for use with eatHistorically(TokenReader, Predicate). Calling this method with A, B, C results in a predicate that matches if the recent history ends with A, B, C, where C is the most recent entry; the given tokens are matched against the end of the history list.
      Parameters:
      readers - the TokenReaders containing the types to match in history
      Returns:
      whether the expected history is matched - as predicate
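      The suffix-matching behaviour described above can be sketched with plain java.util types; Strings stand in for the reader/token types here, which is an assumption for illustration.

      ```java
      import java.util.LinkedList;
      import java.util.List;
      import java.util.function.Predicate;

      public class SuffixPredicateSketch {

          // Sketch of eh(A, B, C): the predicate passes only if the history
          // ends with the expected sequence, most recent entry last.
          static Predicate<LinkedList<String>> endsWith(String... expected) {
              return history -> {
                  if (history.size() < expected.length) return false;
                  int offset = history.size() - expected.length;
                  for (int i = 0; i < expected.length; i++) {
                      if (!history.get(offset + i).equals(expected[i])) return false;
                  }
                  return true;
              };
          }

          public static void main(String[] args) {
              LinkedList<String> history = new LinkedList<>(List.of("X", "A", "B", "C"));
              System.out.println(endsWith("A", "B", "C").test(history)); // true
              System.out.println(endsWith("A", "C").test(history));      // false
          }
      }
      ```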
    • e

      public static Predicate<Token> e(TokenReader reader)
      Creates a predicate for use with eatConditionally(TokenReader, Predicate). The predicate matches the given reader's type against the last token in the history list, so it passes if the most recently eaten token matches the given one.
      Parameters:
      reader - the reader containing the type to match in history
      Returns:
      whether the most recent token matches the given one - as predicate
    • ne

      public static Predicate<Token> ne(TokenReader reader)
      Creates a predicate for use with eatConditionally(TokenReader, Predicate). The predicate does NOT match the given reader's type against the last token in the history list, so it passes if the most recently eaten token does NOT match the given one. It is largely the opposite of e(TokenReader), except that this predicate also passes if there is no previous token at all.
      Parameters:
      reader - the reader containing the type to match in history
      Returns:
      whether the most recent token does not match the given one - as predicate
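      The complementary semantics of e and ne can be sketched as predicates over the previous token. Strings stand in for the token/reader types, and modelling "no previous token" as null is an assumption made for this self-contained illustration.

      ```java
      import java.util.function.Predicate;

      public class LastTokenPredicateSketch {

          // Sketch of e(reader): passes if the previous token equals the expected one.
          static Predicate<String> matches(String expected) {
              return previous -> expected.equals(previous);
          }

          // Sketch of ne(reader): passes if the previous token differs, including
          // the case where there is no previous token at all (modelled as null
          // here, an assumption for illustration).
          static Predicate<String> notMatches(String expected) {
              return previous -> !expected.equals(previous);
          }

          public static void main(String[] args) {
              System.out.println(notMatches("A").test(null)); // true: no previous token
              System.out.println(matches("A").test("A"));     // true
              System.out.println(notMatches("A").test("A"));  // false
          }
      }
      ```

      Note that ne is not a strict negation of e: both reject a matching previous token, but only ne accepts the empty-history case.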