Internet users have ever more opportunities to review a wide variety of products (e.g. Amazon Reviews), services (e.g. MyHammer, jameda) and experiences (e.g. TripAdvisor). Users visit review sites to actively share their experiences with services such as hotel stays, doctor visits or mail-order purchases with other interested customers. For many consumers, these ratings are a helpful source of information when weighing a purchase decision. The growing flood of ratings and reviews on rating portals (e.g. ShopVote) and social media (e.g. qype, flickr), however, also confronts Internet users with the challenge of sifting through the large number of review comments and websites to find those relevant to them. These reviews often consist of free text (so-called user-generated content), which can differ significantly in structure and thematic focus. Especially when such free texts form the only basis for evaluation, users face an interpretation hurdle. Even where quantifiable user ratings are available in the form of scales, they are not always consistent with the freely formulated review comments. While various software solutions allow companies to automatically analyze their customers' opinions (e.g. TrustYou) and thus track trends, users themselves have no tool at hand that supports them in assessing a company's service quality at a glance across millions of reviews.