How efficient is “good enough” for all practical purposes? I wrote a script that lists all the numbers that divide an input, x, as pairs (i, x//i), and I was curious how efficient I should be aiming for. At what point would the script be considered to have lost its efficiency? This is my code (advice is appreciated, but I mainly want to give an idea of how it works).
import time

print('''This program determines all basic factors of a given number, x, as ordered pairs, (a, b) where ab=x.
Type "quit" to exit.''')

timer = input('Would you like a timer? (y/n) ')

while 1:
    try:
        x = input('x = ')
        T0 = time.time()
        b = []
        n = int(x)**0.5
        ran = list(range(1, int(n)+1))
        if int(x) % 2 == 1:
            ran = ran[::2]
        for i in ran:
            if int(x) % i == 0:
                m = (i, int(x)//i)
                b.append(m)
            else:
                pass
    except ValueError as error_1:
        if x == 'quit':
            break
        else:
            print(error_1)
    except EOFError as error_2:
        print(error_2)
    except OverflowError as error_3:
        print(error_3)
    except MemoryError as error_4:
        print(error_4)
    T1 = time.time()
    total = T1 - T0
    print(b)
    print(str(len(b)) + ' pairs.')
    if timer == 'y':
        print("%.5f" % total + ' seconds.')
Some of the results are:
x = 9
[(1, 9), (3, 3)]
2 pairs.
0.00000 seconds.
x = 8234324543
[(1, 8234324543)]
1 pairs.
0.07404 seconds.
x = 438756349875
[(1, 438756349875), (3, 146252116625), (5, 87751269975), (15, 29250423325), (25, 17550253995), (75, 5850084665), (125, 3510050799), (375, 1170016933), (557, 787713375), (1671, 262571125), (2785, 157542675), (8355, 52514225), (13925, 31508535), (41775, 10502845), (69625, 6301707), (208875, 2100569)]
16 pairs.
0.88859 seconds.
So this program can be pretty quick, but then the speed drops rather rapidly once you get up to higher numbers. Anyway, I was just wondering: what is considered acceptable by today’s standards?
So this program can be pretty quick, but then the speed drops rather rapidly once you get up to higher numbers
This is normal. Factoring a large number is a highly time-consuming task.
Indeed, many cryptographic algorithms (like the widely used RSA) rely on the difficulty of factoring large integers.
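For a sense of scale, here is a minimal sketch of the same trial-division idea from your question, wrapped in a function (factor_pairs is my own illustrative name, and it uses math.isqrt rather than **0.5). The loop runs up to sqrt(x), so each extra digit in x multiplies the work by roughly sqrt(10) ≈ 3.16, which is why the timings climb quickly.

import math
import time

def factor_pairs(x):
    """Trial division up to sqrt(x), roughly the approach in the question."""
    pairs = []
    step = 2 if x % 2 == 1 else 1   # an odd x can only have odd divisors
    for i in range(1, math.isqrt(x) + 1, step):
        if x % i == 0:
            pairs.append((i, x // i))
    return pairs

# Rough illustration of how the running time grows with the size of x
# (timings will vary by machine; the point is the trend, not the exact numbers).
for x in (9, 8234324543, 438756349875):
    t0 = time.perf_counter()
    pairs = factor_pairs(x)
    elapsed = time.perf_counter() - t0
    print(x, len(pairs), 'pairs', '%.5f' % elapsed, 'seconds')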
There is no absolute answer – performance is relative. Ultimately, the answer depends on the needs of the users.
If you’re only ever factoring small numbers, and you aren’t trying to do so millions of times a second, speed doesn’t need to be very good, especially if users are typing numbers into a console; anything under a second is acceptable. If they are using a GUI, response times should generally be under 100 ms or so to give the feeling that the results are “immediate”.
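If you want to sanity-check a run against one of those budgets, you could time the call and compare it to a threshold. This is only a sketch: within_budget and BUDGET_SECONDS are illustrative names I made up, and the 0.1-second figure is just the GUI guideline mentioned above.

import math
import time

BUDGET_SECONDS = 0.1  # rough "feels immediate" guideline for interactive use

def factor_pairs(x):
    """Same trial-division sketch as above, repeated so this snippet runs on its own."""
    step = 2 if x % 2 == 1 else 1
    return [(i, x // i) for i in range(1, math.isqrt(x) + 1, step) if x % i == 0]

def within_budget(x, budget=BUDGET_SECONDS):
    """Time one factorization and report whether it met the latency budget."""
    t0 = time.perf_counter()
    pairs = factor_pairs(x)
    elapsed = time.perf_counter() - t0
    return pairs, elapsed, elapsed <= budget

pairs, elapsed, ok = within_budget(438756349875)
print('%d pairs in %.5f s (%s)' % (len(pairs), elapsed, 'within budget' if ok else 'over budget'))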
If you’re using the code to crunch millions of numbers, or numbers with hundreds of digits, performance is, of course, more important. However, if you’re doing that crunching overnight, speed becomes less of a factor, assuming the work gets done before people need it.