Determine available GPU memory with TensorFlow?

I need to know the amount of available memory on the current GPU (if there is one) so that I can choose an optimal batch size for a vector quantization (k-means) program. I have written a test program to probe the available memory, but occasionally I get a Python error instead of a core dump (see the sketch after the listing for what the catchable failure looks like). How can I reliably get the available memory on any system?

import tensorflow as tf
import numpy as np
from kmeanstf import KMeansTF

print("GPU Available: ", tf.test.is_gpu_available())

nn = 1000
dd = 250000
print("{:,d} bytes".format(nn * dd * 4))  # ~1 GB per float32 tensor

# allocate ~1 GB tensors one at a time; the loop fails once GPU memory is exhausted
dic = {}
for x in "ABCD":
    dic[x] = tf.random.normal((nn, dd))
    print(x, dic[x][:1, :2])

print("done...")

TensorFlow does not report free GPU memory directly, but tf.config.experimental.get_memory_info (TensorFlow 2.5+) tells you how much memory TensorFlow has allocated on a device. It returns a dict with the keys 'current' and 'peak' (in bytes); there is no 'available' field. Here is your program adapted to use it:

import tensorflow as tf

print("GPU Available: ", bool(tf.config.list_physical_devices('GPU')))

nn = 1000
dd = 250000
print("{:,d} bytes".format(nn * dd * 4))  # ~1 GB per float32 tensor

dic = {}
for x in "ABCD":
    dic[x] = tf.random.normal((nn, dd))
    print(x, dic[x][:1, :2])

# returns {'current': ..., 'peak': ...} in bytes
info = tf.config.experimental.get_memory_info('GPU:0')
print("Current GPU memory usage: {:,d} bytes".format(info['current']))
print("Peak GPU memory usage:    {:,d} bytes".format(info['peak']))
print("done...")