Wk03 Learning Journal: HW 1 Code Review
I reviewed the Hangman code from the members of group 8: Jael, Sylvia, David, and Quoc.
The reviews I wrote:
Review 1: Jael
Variable names: Variable and method names are clear and descriptive, so it is easy for me to understand the game state.
Logic: The core Hangman logic works, but some details do not fully match the Javadoc or assignment spec, and several //todo comments show that parts were left unfinished.
Unused imports or warnings: All imports are used correctly; I don’t see any obvious unused imports.
Clear formatting: Formatting and indentation are mostly consistent, but the remaining //todo comments make the file feel a bit unfinished.
Comments and Javadoc: There is good Javadoc coverage, but the printed messages and some behaviors do not always match what the comments describe.
Overall: I think Jael mainly struggled with finishing the last polishing steps so that the code behavior, printed messages, and Javadoc match perfectly and all the TODO markers are resolved.
Review 2: Sylvia
Variable names: Names like secretWord, guessedLetters, and remainingGuesses are meaningful and make the code easy for me to follow.
Logic: The main game flow is solid, but there are a few edge cases—such as running out of words, asking for hints when all letters are already guessed, or mixing upper and lower case—that could cause problems.
Unused imports or warnings: All imports are used; I do not see obvious unused imports, though readFile could use a try-with-resources block for safety.
Clear formatting: The code is neatly formatted, and Javadoc blocks are organized and readable.
Comments and Javadoc: Most methods have clear Javadoc and comments that match the intent of the code, but there are a few minor discrepancies (for example, displayGameState()’s Javadoc says it “prints and returns” the state, but the current method only returns the string).
Overall: I think Sylvia mostly struggled with handling tricky edge cases and making sure every method’s behavior exactly matches the Javadoc and the assignment specification.
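As a side note on the try-with-resources suggestion above, this is a minimal sketch of the pattern (the class, method, and message names here are my own illustration, not Sylvia's actual code): the reader is closed automatically even if an exception is thrown partway through reading.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class WordFileReader {
    // Reads one word per line. The try-with-resources block guarantees
    // the BufferedReader is closed, even if readLine() throws mid-file.
    public static List<String> readFile(String path) {
        List<String> words = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                words.add(line.trim());
            }
        } catch (IOException e) {
            // A missing or unreadable file is reported, not fatal.
            System.out.println("Error reading file: " + e.getMessage());
        }
        return words;
    }
}
```

With this shape there is no separate finally block to forget, which is the safety benefit the review is pointing at.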
Review 3: David
Variable names: Variable and method names are clear and descriptive, and I can understand their purpose quickly.
Logic: The Hangman logic is mostly correct, but some methods do not fully follow the exact conditions and message wording described in the Javadoc.
Unused imports or warnings: Imports are used and appropriate; the main concern is that some methods assume fields are already initialized (for example, assuming chooseWord() was called before getHint() or makeGuess()).
Clear formatting: Formatting is clean and consistent, and the structure of the class is easy to read.
Comments and Javadoc: There is detailed Javadoc, but several printed strings and small behaviors in the code differ from what the Javadoc says.
Overall: I think David struggled most with keeping the implementation perfectly aligned with the written specification—especially making sure that the messages, conditions, and field usage exactly match the Javadoc and the tests.
Review 4: Quoc
Variable names: Variable and method names are meaningful and concise, which makes the code easy for me to understand.
Logic: The overall logic for choosing words, tracking guesses, hints, and scoring is clean and matches the Hangman requirements well.
Unused imports or warnings: There are no unused imports, and the file is tidy; some defensive checks could be added around secretWord and guessedWord in case methods are called too early.
Clear formatting: Formatting and spacing are consistent, and related methods are grouped in a logical order.
Comments and Javadoc: Most public methods have clear Javadoc that describes what they do, although the top header comment still looks like a template instead of a finished description.
Overall: I think Quoc’s main struggle was polishing the last few details, such as finalizing header comments and adding defensive checks so the methods are robust even if they are called in an unexpected order.
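To illustrate the defensive-check suggestion, here is a small sketch of the fail-fast pattern (a hypothetical fragment, not Quoc's actual class): guard methods that depend on chooseWord() having run, so a call in the wrong order produces a clear error instead of a NullPointerException.

```java
public class HangmanGuard {
    private String secretWord; // stays null until chooseWord() succeeds

    public void chooseWord(String word) {
        secretWord = word.toLowerCase();
    }

    // Defensive check: fail fast with a descriptive message if the
    // game was never initialized, instead of crashing on a null field.
    public String getHint() {
        if (secretWord == null) {
            throw new IllegalStateException(
                "chooseWord() must be called before getHint()");
        }
        return "The word starts with '" + secretWord.charAt(0) + "'";
    }
}
```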
Feedback I received about my code from my peers:
From my peers’ comments, I learned that my Hangman code is generally seen as well written and easy to follow. They said my variable names are descriptive and clear, my overall game logic is solid, and my Javadoc is detailed and well-structured. They also liked that my chooseWord method handles edge cases thoughtfully and avoids reusing guessed words, and that all my imports are necessary with no warnings. The main improvements they suggested were small polish items: updating the class header, making sure my readFile catch block prints the exact error message described in the spec, and handling case sensitivity more carefully in makeGuess so letter comparisons are always consistent. Overall, the feedback was that my program is strong, and I mostly need to tighten up a few details for accuracy and robustness.
When I reviewed my teammates’ Hangman implementations, I noticed a few clear trends. Everyone used meaningful variable and method names, so readability was consistently good across all four solutions. The core Hangman logic was mostly correct, but a lot of us struggled with the last bits, like getting printed messages and Javadoc wording to match the assignment spec and unit tests exactly. I also saw recurring problems with upper- and lower-case letters not being normalized before comparison. In short, our group did well on the main logic, but many of us, including myself, found that matching the spec perfectly and handling tricky edge cases was the hardest part.
Answers to the reflection questions
What improvements would you make to your code/what was suggested?
There are a few small polish issues that my peers said I should fix in my code. For example, I should update the author and date at the top of the file, ensure that the readFile catch block outputs the full error message specified in the spec, and handle case sensitivity more carefully in the makeGuess method so letter comparisons are always consistent. I should also review my Javadoc and verify that all of my descriptions and printed messages accurately reflect what the code is doing, especially for edge conditions.
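For the case-sensitivity fix in particular, the idea is to normalize both sides of the comparison once, so 'A' and 'a' are always treated as the same letter. A minimal sketch of what I have in mind (the helper name is hypothetical, not my actual makeGuess code):

```java
public class GuessNormalizer {
    // Lower-case both the guess and the secret word before comparing,
    // so mixed-case input can never cause a missed match.
    public static boolean letterInWord(char guess, String secretWord) {
        char g = Character.toLowerCase(guess);
        return secretWord.toLowerCase().indexOf(g) >= 0;
    }
}
```

Doing the normalization in one place keeps every comparison in makeGuess consistent instead of sprinkling toLowerCase() calls around.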
Which unit tests were the hardest to pass?
The unit tests that were the most difficult to pass were those covering edge cases and the exact wording of the printed output. For example, how getHint() behaves when certain letters have already been guessed, or how isGameOver() behaves when the last guess both wins the game and exhausts the available guesses. These tests were extremely particular about wording and behavior, so even minor discrepancies resulted in test failures.
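The win-and-last-guess case comes down to ordering the checks: the win condition has to be evaluated before the out-of-guesses condition. A small sketch of that ordering (a hypothetical fragment, not my actual isGameOver code):

```java
public class GameOverCheck {
    // Check the win condition first, so a final guess that completes
    // the word counts as a win even when it also uses the last guess.
    public static String result(String guessedWord, String secretWord,
                                int remainingGuesses) {
        if (guessedWord.equalsIgnoreCase(secretWord)) {
            return "win";
        }
        if (remainingGuesses <= 0) {
            return "loss";
        }
        return "in progress";
    }
}
```

If the two checks were swapped, the tricky test case (winning guess with zero guesses left) would incorrectly report a loss.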
How do the existing tests function and could they be improved?
The existing tests exercise each public method with specific inputs and compare both the return values and the exact printed wording against expected strings. They could be improved primarily by adding more comments explaining why the different scenarios matter, and by grouping related tests so the developer can more easily see the logic behind them.
Do the existing unit tests cover the full range of the sub classes?
The tests for the Hangman class itself cover much of the main functionality, including loading the word list, picking a word, guessing letters, determining whether the game has been won or lost, and providing hints.
How would you change the unit tests?
I would add tests that explicitly check for invalid or unexpected states, and add comments that explain the purpose of each test so students can better understand what behavior is being verified.
What did you struggle with?
I struggled the most with aligning all the small details of my code with the assignment specification and the tests. It was not hard to get the basic logic working, but it took more effort to make everything match.
What did one of your teammates struggle with?
One of my teammates struggled with finishing the last polish on their code, such as removing //todo comments and making sure that printed messages and error strings exactly matched the Javadoc and the spec. (I ran into the same issue myself.) Their core Hangman logic worked, but those final clean-up steps kept the code from feeling completely finished.
Was any part of the code a struggle for YOU?
Yes, I found methods like getHint() and isGameOver() challenging because they had to handle edge cases correctly and match the tests very closely.
Was any part of writing the code easy for YOU?
The easier parts for me were setting up the fields, writing the constructors, and implementing the basic loop in makeGuess to reveal letters and update the score.
What was your biggest HW1 victory?
My biggest HW1 victory was getting my Hangman code to be readable and mostly aligned with the tests, and then hearing from my peers that it was easy to follow. That made me feel more confident about my ability to read specs carefully and write code that other people can understand.