I have to write a program to search through a file containing lines and find lines that match to a degree of tolerance but are not necessarily the same. So for example the following lines would match:
[Concern] server=x814 process_manager=q5 trade_portal_3 is not responding
[Critical] server=y773 process_manager=q2 fix_portal_2 is not responding
The point is to identify groups of similar messages. However, the processing code must be completely agnostic, so a simple regexp such as the following would not fit the bill:
[\w+] server=\w+ process_manager=\w+ \w+ is not responding
I’m assuming that what I need is some kind of tokenising / scoring / diffing algorithm, but I’m not sure whether to attack this myself or whether there’s something available that will help me with this or whether another approach would be more appropriate.
I’ve tagged this question with C#, C++ and Python as I’m still open to how this is achieved, but I’m mostly interested in the approach. Thanks.
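To make the tokenising/scoring idea concrete, here’s a minimal sketch under the assumption that whitespace tokens line up one-to-one (token_score is an illustrative name, not an existing library function):

```python
def token_score(a, b):
    """Fraction of aligned whitespace tokens that are identical."""
    ta, tb = a.split(), b.split()
    if not ta or not tb:
        return 0.0
    same = sum(x == y for x, y in zip(ta, tb))
    return same / max(len(ta), len(tb))

s1 = "[Concern] server=x814 process_manager=q5 trade_portal_3 is not responding"
s2 = "[Critical] server=y773 process_manager=q2 fix_portal_2 is not responding"
score = token_score(s1, s2)  # 3 of 7 tokens agree: "is", "not", "responding"
```

A cheap score like this already separates the two example lines from unrelated ones, but it breaks down when tokens shift position – which is where diff-style or edit-distance measures come in.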
I was looking around and a simple-looking method is the Jaro similarity.
Here’s my rough implementation in Python 3:
import math

def match(s1, s2):
    # characters common to both strings (note: sets discard duplicates)
    return set.intersection(set(s1), set(s2))

def technical_match(s1, s2):
    matches = match(s1, s2)
    # maximum allowed index distance, as defined in the article
    max_distance = math.floor(max(len(s1), len(s2)) / 2) - 1
    true_list = []
    for i in matches:
        distance = abs(s1.index(i) - s2.index(i))
        if distance <= max_distance:
            true_list.append(i)
    return true_list

def diff_letters(a, b):
    # note - this function comes from an SO answer and is not mine
    return sum(x != y for x, y in zip(a, b))

def transpositions(s1, s2):
    t = technical_match(s1, s2)
    s1 = ''.join(i for i in s1 if i in t)
    s2 = ''.join(i for i in s2 if i in t)
    # the article counts half the number of out-of-order characters
    return diff_letters(s1, s2) // 2

def jaro_similarity(s1, s2):
    matches = len(technical_match(s1, s2))
    if matches == 0:
        return 0
    return 1/3 * (matches/len(s1) + matches/len(s2)
                  + (matches - transpositions(s1, s2))/matches)
To summarize: match() turns the two strings into sets and intersects them to get the characters the strings share. technical_match() then follows the definition in the article – for each shared character it checks the distance between its indices in the two strings, and keeps it if that distance is within the maximum distance, again as defined by the article. It returns the list of true matches; the length of that list is the match count used in the similarity formula.
Then we define the number of transpositions, again as in the article. This is mildly confusing, but my understanding was improved by the discussion on the article’s talk page.
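As a worked example of the formula, take the classic pair from the Wikipedia article, “MARTHA” and “MARHTA”: all six characters match (m = 6) and the swapped T/H pair counts as one transposition (t = 1):

```python
# Jaro similarity for "MARTHA" vs "MARHTA", using the article's example values
m, t = 6, 1          # match count and transposition count
len1, len2 = 6, 6    # both strings have six characters
sim = (1/3) * (m/len1 + m/len2 + (m - t)/m)
# sim = (1/3) * (1 + 1 + 5/6) ≈ 0.944
```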
Then I calculate the Jaro similarity using the simple formula given in the article. From there, we can cycle through a mock file and use this to compare strings (note – this isn’t quite what you’re looking for; it matches lines against a pattern instead of finding groups, but I’ll work on that more later):
match_text = open('foobar.txt', 'r').read().splitlines()
pattern = 'hat'
constant = .5
results = []
for i in match_text:
    if jaro_similarity(i, pattern) > constant:
        results.append(i)
print(results)
I’m still checking for bugs, but it’s looking good so far. Two strings that are exactly the same return a Jaro similarity of 1.
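Since the original goal was finding groups rather than matching a fixed pattern, here’s a minimal greedy-clustering sketch. To keep it self-contained it uses the standard library’s difflib.SequenceMatcher as the similarity measure, but any scorer (including a Jaro similarity) could be dropped in; group_lines and the 0.6 threshold are my own illustrative choices:

```python
from difflib import SequenceMatcher

def group_lines(lines, threshold=0.6):
    """Greedily add each line to the first group whose representative
    (the group's first line) is similar enough, else start a new group."""
    groups = []
    for line in lines:
        for group in groups:
            if SequenceMatcher(None, group[0], line).ratio() >= threshold:
                group.append(line)
                break
        else:
            groups.append([line])
    return groups

lines = [
    "[Concern] server=x814 process_manager=q5 trade_portal_3 is not responding",
    "[Critical] server=y773 process_manager=q2 fix_portal_2 is not responding",
    "disk quota exceeded on /var/log",
]
groups = group_lines(lines)  # the two "not responding" lines group together
```

Greedy assignment is order-dependent; for large files, computing pairwise scores and feeding them to a proper clustering step would be more robust.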
You can mess with the code here. I asked a code review question on this, which can be found here; it has a fabulous answer improving my code.
There actually seems to be a Python package that does exactly what you’re looking for: see fuzzywuzzy. Thanks to Mathias Ettinger on Code Review for pointing this out! It also seems to have ports to some other languages like Java, Ruby, and C.
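For the simplest cases the standard library gets you surprisingly far as well: difflib.get_close_matches returns the entries of a list that are similar to a query, best match first (the word list below is made up for illustration):

```python
from difflib import get_close_matches

known = ["trade_portal_3", "fix_portal_2", "disk_check"]

# up to n closest entries with similarity ratio >= cutoff, best first
matches = get_close_matches("trade_portal_2", known, n=2, cutoff=0.6)
# matches == ['trade_portal_3', 'fix_portal_2']
```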