Metal Haiku Generator¶
Objective: Train a neural network on a dataset of metal lyrics, then use the trained model to generate haikus that capture the intense, emotive essence of metal music within the haiku's concise form.
#Importing Packages
from tensorflow.keras.layers import Input, SimpleRNN, LSTM, GRU, Conv1D, Embedding, Dense, Bidirectional, Dropout
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.feature_extraction.text import CountVectorizer
from scipy.spatial.distance import pdist, squareform
from tensorflow.keras import Model
import matplotlib.pyplot as plt
import tensorflow as tf
import random as rand
import numpy as np
import pronouncing
import markovify
import textstat
import math
import re
import syllables
Load lyric dataset produced by scraping.ipynb¶
The cornerstone of this project is a dataset of metal lyrics scraped from metal-archives.com, the most comprehensive archive of metal music. This section describes how the dataset was acquired and preprocessed.
Scraping Process: The scraping process was executed using the Beautiful Soup Python library within the Jupyter Notebook titled "scraping.ipynb." This notebook interacted with the metal-archives.com website, extracting lyrics exclusively from bands based in the United States. This selection criterion aimed to primarily retrieve lyrics written in English, ensuring linguistic consistency for the subsequent text generation phase.
Tokenization and Preprocessing: After scraping, the dataset was preprocessed for quality and uniformity. The preprocessing steps included:
Tokenization: The scraped lyrics were tokenized into individual words. This process transformed the continuous text into a sequence of discrete tokens, allowing the neural network to comprehend and learn the underlying language patterns within the lyrics.
Removal of Non-Alphabetical Elements: Characters unrelated to language, such as punctuation and special symbols, were removed to reduce noise in the corpus before analysis.
lyric_path = '/Users/patricknaylor/Desktop/Metal/Data/lyrics_1.txt'
with open(lyric_path, 'r') as file:
    song = file.read()
lyrics = song.replace('\ufeff', '').split("\n")
# Rebinding the loop variable would not modify the list, so build a cleaned
# copy instead; keep spaces so the tokenizer can still split on them.
lyrics = [re.sub(r'x2', '', re.sub(r'[^a-zA-Z ]', '', line)) for line in lyrics]
Create Markov model to generate seed sentences¶
markov_model = markovify.NewlineText(str("\n".join(lyrics)), well_formed=False, state_size=3)
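markovify's `NewlineText` treats each line as its own chain and, with `state_size=3`, conditions each next word on the three preceding words. The toy sketch below (an illustration of the idea, not markovify's actual implementation) shows what a word-level chain of order 2 looks like:

```python
import random
from collections import defaultdict

def build_chain(lines, state_size=2):
    """Map each tuple of `state_size` preceding words to the words that followed it."""
    chain = defaultdict(list)
    for line in lines:
        words = line.split()
        for i in range(len(words) - state_size):
            state = tuple(words[i:i + state_size])
            chain[state].append(words[i + state_size])
    return chain

def make_sentence(chain, max_words=10, seed=0):
    rng = random.Random(seed)
    state = rng.choice(list(chain))        # random starting state
    out = list(state)
    while len(out) < max_words and state in chain:
        out.append(rng.choice(chain[state]))   # sample a successor word
        state = tuple(out[-len(state):])       # slide the state window
    return " ".join(out)

lines = ["the fire burns tonight", "the fire never dies"]
chain = build_chain(lines, state_size=2)
print(chain[("the", "fire")])  # → ['burns', 'never']
```

With a higher `state_size`, generated text copies the corpus more faithfully but has fewer branching points, which is why `well_formed=False` and a large `tries` value are used later to coax sentences out of sparse states.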
Tokenize lyric database¶
sequences = lyrics
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=20000)
tokenizer.fit_on_texts(sequences)
V = len(tokenizer.word_index)+1
seq = pad_sequences(tokenizer.texts_to_sequences(sequences), maxlen=30)
train_X, train_y = seq[:, :-1], tf.keras.utils.to_categorical(seq[:, -1], num_classes=V)
print(train_X.shape, train_y.shape)
(57446, 29) (57446, 20761)
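Each padded sequence is split into an input of the first 29 tokens and a target consisting of the final token (one-hot encoded above). A dependency-free illustration of that windowing, using toy token ids in place of the real tokenizer output:

```python
def make_training_pair(token_ids, maxlen=30):
    """Left-pad with zeros to `maxlen` (mimicking pad_sequences' default
    'pre' padding/truncation), then use the last token as the prediction
    target and everything before it as the input window."""
    padded = [0] * (maxlen - len(token_ids)) + token_ids[-maxlen:]
    return padded[:-1], padded[-1]

# Toy sequence of 4 token ids standing in for a tokenized lyric line
x, y = make_training_pair([12, 7, 33, 5], maxlen=30)
print(len(x), y)  # → 29 5
```

This is why `train_X` has 29 columns while `train_y` is a one-hot vector over the 20,761-word vocabulary.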
Define RNN model to generate song lyrics¶
D = 512
#Simple RNN
T = train_X.shape[1]
i = Input(shape=(T,))
x = Embedding(V, D)(i)
x = Dropout(0.2)(x)
x = SimpleRNN(150)(x)
x = Dense(V, activation="softmax")(x)
rnn_model = Model(i, x)
adam = tf.keras.optimizers.Adam(learning_rate=0.001)
rnn_model.compile(optimizer=adam, metrics=["accuracy"], loss="categorical_crossentropy")
rnn_model.summary()
WARNING:absl:At this time, the v2.11+ optimizer `tf.keras.optimizers.Adam` runs slowly on M1/M2 Macs, please use the legacy Keras optimizer instead, located at `tf.keras.optimizers.legacy.Adam`.
WARNING:absl:There is a known slowdown when using v2.11+ Keras optimizers on M1/M2 Macs. Falling back to the legacy Keras optimizer, i.e., `tf.keras.optimizers.legacy.Adam`.
Model: "model_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) [(None, 29)] 0
embedding_3 (Embedding) (None, 29, 512) 10629632
dropout_3 (Dropout) (None, 29, 512) 0
simple_rnn_3 (SimpleRNN) (None, 150) 99450
dense_3 (Dense) (None, 20761) 3134911
=================================================================
Total params: 13863993 (52.89 MB)
Trainable params: 13863993 (52.89 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
from keras.callbacks import ReduceLROnPlateau , EarlyStopping
from tensorflow.keras.optimizers import Adam
import warnings
warnings.filterwarnings('ignore')
# Set a learning rate annealer
learning_rate_reduction = ReduceLROnPlateau(monitor='accuracy',
                                            patience=3,
                                            verbose=1,
                                            factor=0.5,
                                            min_lr=0.00001)
es = EarlyStopping(monitor="loss", mode="min", verbose=1, patience=20)
Train Model¶
rnn_r = rnn_model.fit(train_X, train_y, epochs=100,callbacks=[learning_rate_reduction,es])
Epoch 1/100   1796/1796 - 77s 43ms/step - loss: 7.8723 - accuracy: 0.0312 - lr: 0.0010
Epoch 10/100  1796/1796 - 69s 38ms/step - loss: 1.4738 - accuracy: 0.7202 - lr: 0.0010
Epoch 33: ReduceLROnPlateau reducing learning rate to 5.0000e-04
Epoch 53: ReduceLROnPlateau reducing learning rate to 2.5000e-04
Epoch 62: ReduceLROnPlateau reducing learning rate to 1.2500e-04
Epoch 70: ReduceLROnPlateau reducing learning rate to 6.2500e-05
Epoch 75: ReduceLROnPlateau reducing learning rate to 3.1250e-05
Epoch 80: ReduceLROnPlateau reducing learning rate to 1.5625e-05
Epoch 88: ReduceLROnPlateau reducing learning rate to 1.0000e-05
Epoch 100/100 1796/1796 - 69s 38ms/step - loss: 0.4182 - accuracy: 0.9079 - lr: 1.0000e-05
(full per-epoch log truncated; training ran all 100 epochs at ~69 s/epoch)
Define scoring metrics for lyrical output based on readability and rhyme frequency¶
def calc_readability(input_lines):
    avg_readability = 0
    for line in input_lines:
        avg_readability += textstat.automated_readability_index(line)
    return avg_readability / len(input_lines)
def calc_rhyme_density(lines):
    total_syllables = 0
    rhymed_syllables = 0
    for line in lines:
        for word in line.split():
            p = pronouncing.phones_for_word(word)
            if len(p) == 0:
                break
            # Named syl_count to avoid shadowing the imported `syllables` module
            syl_count = pronouncing.syllable_count(p[0])
            total_syllables += syl_count
            has_rhyme = False
            for rhyme in pronouncing.rhymes(word):
                if has_rhyme:
                    break
                # Only check the first five lines for a rhyming word
                for idx, r_line in enumerate(lines):
                    if idx > 4:
                        break
                    if rhyme in r_line:
                        rhymed_syllables += syl_count
                        has_rhyme = True
                        break
    return rhymed_syllables / total_syllables
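A worked toy version of the metric, with a hand-supplied mini-lexicon standing in for pronouncing's CMU-dictionary lookups: a word's syllables count as "rhymed" when any of its rhymes appears (as a substring, matching the check above) in another line.

```python
# Hypothetical mini-lexicon replacing pronouncing's CMU-dictionary lookups
SYLLABLES = {"night": 1, "light": 1, "burning": 2, "fire": 2}
RHYMES = {"night": ["light"], "light": ["night"], "burning": [], "fire": []}

def toy_rhyme_density(lines):
    total, rhymed = 0, 0
    for line in lines:
        for word in line.split():
            syl = SYLLABLES[word]
            total += syl
            # Count this word's syllables as rhymed if any rhyme occurs in a line
            if any(r in other for r in RHYMES[word] for other in lines):
                rhymed += syl
    return rhymed / total

print(toy_rhyme_density(["night fire", "light burning"]))  # → 0.3333333333333333
```

Here "night" and "light" rhyme (2 of 6 total syllables), giving a density of 1/3.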
def score_line(input_line, artists_lines, artists_avg_readability, artists_avg_rhyme_idx):
    gen_readability = textstat.automated_readability_index(input_line)
    # calc_rhyme_density expects a list of lines, so wrap the single line
    gen_rhyme_idx = calc_rhyme_density([input_line])
    comp_lines = compare_lines(input_line, artists_lines)
    # Scores based on readability, rhyme index, and originality; lower is better.
    line_score = (artists_avg_readability - gen_readability) + (artists_avg_rhyme_idx - gen_rhyme_idx) + comp_lines
    return line_score
def compare_lines(input_line, artists_lines):
    '''
    input_line is the fire line our AI generates
    artists_lines are the original lines for the artist
    The lower the score the better! We want unique lines
    '''
    avg_dist = 0
    total_counted = 0
    for line in artists_lines:
        # Converts the two sentences to a matrix of token counts
        v = CountVectorizer()
        word_vector = v.fit_transform([input_line, line])
        # 1 - cosine distance = cosine similarity between the sentence vectors
        cos_sim = 1 - pdist(word_vector.toarray(), 'cosine')[0]
        if not math.isnan(cos_sim):
            avg_dist += cos_sim
            total_counted += 1
    return avg_dist / total_counted
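The core of `compare_lines` is cosine similarity between bag-of-words count vectors. A dependency-free sketch of that computation (toy sentences, not the real lyric data):

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two sentences' word-count vectors."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else float("nan")

print(cosine_similarity("the fire burns", "the fire dies"))  # → 0.6666666666666666
```

Two of the three words overlap, so the similarity is 2/3; identical sentences score 1.0, which is why a lower average here means a more original generated line.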
Run model¶
The model is run from a seed sentence that combines the final words of the previous line with a Markov-generated sentence, and generates lines whose syllable count is close to that required by a haiku.
def generate_line(seed_phrase, model, length_of_line):
    seed_words = ' '.join(seed_phrase.split(' ')[-2:])
    syl = syllables.estimate(seed_words)
    while syl < length_of_line:
        seed_tokens = pad_sequences(tokenizer.texts_to_sequences([seed_phrase]), maxlen=29)
        output_p = model.predict(seed_tokens)
        # Greedy decoding: take the highest-probability class and map it back
        # to its word via index_word (class 0 is the padding token).
        output_word = tokenizer.index_word[np.argmax(output_p, axis=1)[0]]
        syl += syllables.estimate(output_word)
        seed_phrase += " " + output_word
    return seed_phrase
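`syllables.estimate` is a heuristic estimator. A naive vowel-group counter (an assumption about the general approach, not the library's actual algorithm) gives the flavor of how such estimates work:

```python
import re

def estimate_syllables(word):
    """Very rough heuristic: count runs of vowels, treating a trailing
    silent 'e' as non-syllabic; every word has at least one syllable."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1  # drop the usually-silent final 'e'
    return max(1, count)

print(estimate_syllables("burning"), estimate_syllables("fire"))  # → 2 1
```

Heuristics like this are imperfect for English, which is one reason the generated lines only land "close to" the 5-7-5 pattern and need the retry loop below.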
Generate Haiku¶
Generate a haiku from a user-supplied first line. The next two lines are produced by generating a fixed number of candidate lines and selecting the highest-scoring option.
def generate_haiku(model, intro_line, artists_lines, length_of_line, length_of_song=20,
                   min_score_threshold=-0.2, max_score_threshold=0.2, tries=5):
    artists_avg_readability = calc_readability(artists_lines)
    artists_avg_rhyme_idx = calc_rhyme_density(artists_lines)
    fire_song = [intro_line + " "]
    line_lengths = [7, 5]
    cur_tries = 0
    candidate_lines = []
    seed_sentence = intro_line  # fallback if the Markov model yields nothing
    while len(fire_song) < 3:
        try:
            seed_sentence = markov_model.make_sentence(tries=100).split(" ")
            print('Seed Sentence: ', seed_sentence)
            # Combine the last 3 words of the previous line with the first 2 Markov words
            seed_sentence = " ".join(fire_song[-1].split(' ')[-3:]) + " " + " ".join(seed_sentence[:2])
        except AttributeError:
            pass  # make_sentence returned None; reuse the previous seed
        line = generate_line(seed_sentence, model, line_lengths[len(fire_song) - 1])
        print(syllables.estimate(' '.join(line.split(' ')[2:])))
        if syllables.estimate(' '.join(line.split(' ')[2:])) == line_lengths[len(fire_song) - 1]:
            cur_tries += 1
            print(cur_tries)
            line_score = score_line(line, lyrics, artists_avg_readability, artists_avg_rhyme_idx)
            candidate_lines.append((line_score, line))
            # The [2:] slice here matches the slice used when appending the line
            if (min_score_threshold <= line_score <= max_score_threshold
                    and syllables.estimate(' '.join(line.split(' ')[2:])) == line_lengths[len(fire_song) - 1]):
                fire_song.append(' '.join(line.split(' ')[2:]) + " ")
                cur_tries = 0
                print("Generated line: ", len(fire_song))
        if cur_tries >= tries:
            # Out of tries: fall back to the best-scoring candidate with the right syllable count
            lowest_score = np.inf
            best_line = ""
            for cand in candidate_lines:
                if (cand[0] < lowest_score
                        and syllables.estimate(' '.join(cand[1].split(' ')[2:])) == line_lengths[len(fire_song) - 1]):
                    lowest_score = cand[0]
                    best_line = cand[1]
            candidate_lines = []
            fire_song.append(' '.join(best_line.split(' ')[2:]) + " ")
            print("Generated line: ", len(fire_song))
            cur_tries = 0
    print("Generated song with avg rhyme density: ", calc_rhyme_density(fire_song),
          "and avg readability of: ", calc_readability(fire_song))
    return fire_song
first_line = 'The earth is burning'
rnn = generate_haiku(rnn_model, first_line, lyrics, length_of_line=12, tries=10)
Seed Sentence:  ['No', 'constellations', 'in', 'the', 'sky,', "there's", 'magic', 'in', 'the', 'air', 'and', 'echo']
...
Generated line:  2
Seed Sentence:  ['this', 'is', 'not', 'my', 'home']
Seed Sentence:  ['On', 'the', 'day', 'I', 'die!']
Generated line:  3
Generated song with avg rhyme density:  0.25 and avg readability of:  2.8000000000000003
(per-candidate predict output and syllable counts truncated)
print("Song Generated with SimpleRNN:")
for line in rnn:
    print(line)
Song Generated with SimpleRNN:
The earth is burning
Heavy Metal, disaster
On the you scream love
In the realm of creative experimentation, this project unites neural networks, the evocative world of metal lyrics, and the succinct beauty of haikus. However, it's important to recognize that the resulting haikus, while aesthetically appealing and steeped in the imagery of metal motifs, lack profound meaning or genuine depth.
The allure of the haikus predominantly emerges from the masterful arrangement of words, drawing inspiration from the thematic elements of metal culture. The neural network's ability to replicate the structural integrity of haikus, combined with the raw intensity often found in metal lyrics, leads to the formation of verses that resonate with the senses.
Yet, beneath the captivating surface, these haikus are a testament to the challenges of imbuing AI-generated art with true human-like understanding. While they mimic the form and language of both metal lyrics and traditional haikus, they ultimately reveal the current limitations of AI in grasping nuanced emotional contexts and conveying authentic artistic expression.
Scraping code can be found here.